| Name of the Table | Source |
|---|---|
| D1_1_SDG | dashboards.sdgindex.org |
| D2_2_Unemployment_rate | ilo.org |
| D3_0_GDP_per_capita | data.worldbank.org |
| D3_1_Military_expenditure_percent_GDP | data.worldbank.org |
| D3_2_Military_expenditure_percent_gov_exp | data.worldbank.org |
| D4_0_Internet_usage | ourworldindata.org |
| D5_0_Human_freedom_index | cato.org |
| D6_0_Disaters | kaggle.com |
| D7_0_COVID | github.com |
| D8_0_Conflicts | datacatalog.worldbank.org |
Comparative Analysis of SDG Implementation Evolution Worldwide
1 Introduction
1.1 Overview and Motivation
The global significance of the SDGs is our starting point. The adoption of the SDGs by the United Nations in 2015 marked a significant global commitment to address pressing issues such as poverty, inequality, climate change, and more. The fact that these goals were unanimously adopted by 193 member states underscores their importance. This prompted us to ask ourselves: can we evaluate the progress? What has really been done so far? Although the SDGs have attracted considerable attention and backing, it is essential to evaluate the events preceding and following their implementation. Understanding the actions taken and the progress made is essential in determining whether these global commitments are resulting in tangible improvements to individuals’ lives. By examining the evolution of all countries and their respective contributions towards achieving the SDGs, we can develop a comprehensive understanding of collective efforts and identify potential disparities or gaps.
1.3 Research questions
Focus on factors: What can explain the state of countries regarding sustainable development? (We will analyse different factors: scores from the Human Freedom Index, GDP per capita, military expenditure as a % of GDP/government expenditure, unemployment rate, and internet usage.) See the data description for more precise information about these factors.
Focus on time: How has the adoption of the SDGs in 2015 influenced the achievement of the SDGs? (We want to compare the achievement of the different countries before and after 2015 (SDG scores are calculated even before the adoption) to see whether the adoption of the SDGs gave a real “push” to sustainable development.)
Focus on events: Is the evolution of sustainable development influenced by uncontrollable events, such as economic crises, health crises and natural disasters? (We will analyse the impact of COVID-19, natural disasters and conflicts (number of deaths, damages, etc.) on the SDG scores.) See the data description for more precise information about how the impact of these events is materialized in the data.
Focus on relationships between SDGs: How are the different SDGs linked? (We want to see whether some SDGs are linked in the sense that a high score on one implies a high score on another, and thus whether we can form groups of SDGs that are comparable in that way.)
2 Data
2.1 Sources
We collect our data from the Sustainable Development Report (SDG), the International Labour Organization (ILOSTAT), the World Bank, Our World in Data, the CATO Institute, one dataset from Kaggle (disasters: we could not find relevant, accessible information elsewhere) and GitHub. We found different datasets containing useful information related to the SDGs. The details about these data and the links are presented in the next section. With the help of the kableExtra package, we present below the list of our sources and the links to each one:
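As a minimal sketch of how such a source table can be rendered with kableExtra (using a made-up two-row data frame, not the full source list):

```r
# A minimal sketch of the source-table rendering, using a hypothetical
# two-row data frame `sources`; kable() builds the table and
# kable_styling() applies the usual Bootstrap formatting.
library(knitr)
library(kableExtra)

sources <- data.frame(
  `Name of the Table` = c("D1_1_SDG", "D3_0_GDP_per_capita"),
  Source = c("dashboards.sdgindex.org", "data.worldbank.org"),
  check.names = FALSE
)

kable(sources, format = "html") %>%
  kable_styling(bootstrap_options = "striped", full_width = FALSE)
```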
2.2 Description
During the wrangling process, we add data to our main table (D1_1_SDG) from the other datasets, matching them on the country, the country code, and the year. The tables below show all the variables present in the databases that we then merge to obtain our final table for the analysis, as well as each variable of interest that we keep.
D1_1_SDG
Our first database is our main one. It concerns the SDG scores. The table below summarizes the variables present:
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| overallscore | Overall score on all 17 SDGs (the scores are percentages of achievement of the goals, determined by the UN based on several indicators) |
| goal1:goal17 | Score on each SDG except SDG 14 (16 variables) |
| population | Population of the country |
D2_2_Unemployment_rate
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| unemployment.rate | Unemployment rate (% of the population 15 years old and older) |
D3_0_GDP_per_capita
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| GDPpercapita | GDP per capita |
D3_1_Military_expenditure_percent_GDP
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| MilitaryExpenditurePercentGDP | Military expenditures in percentage of GDP |
D4_0_Internet_usage
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| internet.usage | Internet usage (% of the population) |
D5_0_Human_freedom_index
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| region | Part of the world, group of countries (e.g. Eastern Europe, Sub-Saharan Africa, South Asia, etc.) |
| pf_law | Rule of law, mean score of: Procedural justice, Civil justice, Criminal justice, Rule of law (V-Dem) |
| pf_security | Security and safety, mean score of: Homicide, Disappearances, conflicts, terrorism |
| pf_movement | Freedom of movement (V-Dem), Freedom of movement (CLD) |
| pf_religion | Freedom of religion, Religious organization repression |
| pf_assembly | Civil society entry and exit, Freedom of assembly, Freedom to form/run political parties, Civil society repression |
| pf_expression | Direct attacks on the press, Media and expression (V-Dem), Media and expression (Freedom House), Media and expression (BTI), Media and expression (CLD) |
| pf_identity | Same-sex relationships, Divorce, Inheritance rights, Female genital mutilation |
| ef_government | Government consumption, Transfers and subsidies, Government investment, Top marginal tax rate, State ownership of assets |
| ef_legal | Judicial independence, Impartial courts, Protection of property rights, Military interference, Integrity of the legal system, Legal enforcement of contracts, Regulatory costs, Reliability of police |
| ef_money | Money growth, Standard deviation of inflation, Inflation: Most recent year, Freedom to own foreign currency |
| ef_trade | Tariffs, Regulatory trade barriers, Black-market exchange rates, Movement of capital and people |
| ef_regulation | Credit market regulations, Labor market regulations, Business regulations |
D6_0_Disaters
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| continent | Continents affected by the disasters (e.g. floods, hurricanes) |
| total_deaths | Number of deaths caused by disasters |
| no_injured | Number of people injured by disasters |
| no_affected | Number of people affected by disasters |
| no_homeless | Number of people made homeless by disasters |
| total_affected | Total number of people affected by disasters |
| total_damages | Total infrastructure damage |
D7_0_COVID
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| deaths_per_million | Number of COVID-19 deaths per million inhabitants |
| cases_per_million | Number of COVID-19 cases per million inhabitants |
| stringency | Government Response Stringency Index: composite measure based on 9 response indicators including school closures, workplace closures, and travel bans |
D8_0_Conflicts
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| ongoing | Variable coded 1 for more than 25 deaths in intrastate conflict and 0 otherwise according to UCDP/PRIO Armed Conflict Dataset 17.1. |
| sum_deaths | Best estimate of deaths in all categories of violence (non-state, one-sided and state-based) recorded by the Uppsala Conflict Data Program in the country based on the UCDP GED dataset (unpublished 2016 data). The location of these events is used for estimating the extent of violence. |
| pop_affected | Share of population affected by violence in percentage (0 to 100) measured as described above based on population data from CIESIN, the PRIO-GRID structure as well as UCDP GED. |
| area_affected | Area affected by conflict |
| maxintensity | Maximum intensity of conflict in the country: minor armed conflict (1) or war (2). Coded 2 if there is at least one war (>= 1000 deaths in intrastate conflict) during the year. Data from UCDP/PRIO Armed Conflict Dataset 17.1. |
2.3 Wrangling/cleaning
To accommodate the large scale of the datasets we intended to utilize, we decided to pre-clean each of our datasets before merging them. This allowed us to simplify the process of cleaning our final dataset afterwards.
2.3.1 Dataset on SDG
This is our main dataset, which we clean in order to keep the columns containing the following information: country name, country code, year, population, overall score and the SDG scores.
We begin by importing the data and transforming it into a dataframe. We rename the columns and transform the scores into numeric variables.
Code
D1_0_SDG <- read.csv(here("scripts","data","SDG.csv"), sep = ";")
D1_0_SDG <- as.data.frame(D1_0_SDG)
D1_0_SDG <- D1_0_SDG[,1:22]
colnames(D1_0_SDG) <- c("code", "country", "year", "population", "overallscore", "goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal14", "goal15", "goal16", "goal17")
D1_0_SDG[["overallscore"]] <- as.double(gsub(",", ".", D1_0_SDG[["overallscore"]]))
makenumSDG <- function(D1_0_SDG) {
for (i in 1:17) {
varname <- paste("goal", i, sep = "")
D1_0_SDG[[varname]] <- as.double(gsub(",", ".", D1_0_SDG[[varname]]))
}
return(D1_0_SDG)
}
D1_0_SDG <- makenumSDG(D1_0_SDG)
We continue by inspecting the missing values.
Code
propmissing <- numeric(length(D1_0_SDG))
for (i in 1:length(D1_0_SDG)){
proportion <- mean(is.na(D1_0_SDG[[i]]))
propmissing[i] <- proportion
}
propmissing
#> [1] 0.0000 0.0000 0.0000 0.0778 0.0000 0.0833 0.0000 0.0000 0.0000
#> [10] 0.0000 0.0000 0.0000 0.0000 0.0000 0.0944 0.0000 0.0000 0.0000
#> [19] 0.2278 0.0000 0.0000 0.0000
Seeing that population has many NAs, we investigate and find that this is expected: some of the observations are not countries but regions, so we can drop these observations.
Code
SDG0 <- D1_0_SDG |>
group_by(code) |>
select(population) |>
summarize(NaPop = mean(is.na(population))) |>
filter(NaPop != 0)
print(SDG0, n = 180)
#> # A tibble: 14 x 2
#> code NaPop
#> <chr> <dbl>
#> 1 _Africa 1
#> 2 _E_Euro_Asia 1
#> 3 _E_S_Asia 1
#> 4 _HIC 1
#> 5 _LAC 1
#> 6 _LIC 1
#> 7 _LIC_LMIC 1
#> 8 _LMIC 1
#> 9 _MENA 1
#> 10 _OECD 1
#> 11 _Oceania 1
#> 12 _SIDS 1
#> 13 _UMIC 1
#> 14 _World 1
D1_0_SDG <- D1_0_SDG %>%
filter(!str_detect(code, "^_"))
There are now no more missing values in the variable population, and we see that we have information on 166 countries.
Code
(country_number <- length(unique(D1_0_SDG$country)))
#> [1] 166
We see that there are NAs in only 3 SDG scores (1, 10 and 14), and that when there are NAs for a country, they cover either all years or none. We decide to run more investigations on those 3 SDG scores to decide whether to keep them for the analysis.
Code
SDG1 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na1 = mean(is.na(goal1)),
Na2 = mean(is.na(goal2)),
Na3 = mean(is.na(goal3)),
Na4 = mean(is.na(goal4)),
Na5 = mean(is.na(goal5)),
Na6 = mean(is.na(goal6)),
Na7 = mean(is.na(goal7)),
Na8 = mean(is.na(goal8)),
Na9 = mean(is.na(goal9)),
Na10 = mean(is.na(goal10)),
Na11 = mean(is.na(goal11)),
Na12 = mean(is.na(goal12)),
Na13 = mean(is.na(goal13)),
Na14 = mean(is.na(goal14)),
Na15 = mean(is.na(goal15)),
Na16 = mean(is.na(goal16)),
Na17 = mean(is.na(goal17))) |>
filter(Na1 != 0 | Na2 != 0 | Na3 != 0| Na4 != 0| Na5 != 0| Na6 != 0| Na7 != 0| Na8 != 0| Na9 != 0| Na10 != 0| Na11 != 0| Na12 != 0| Na13 != 0| Na14 != 0| Na15 != 0| Na16 != 0| Na17 != 0)
for (col in names(SDG1)[-1]) {
print(paste(col, "count:", sum(SDG1[[col]] != 0)))
}
#> [1] "Na1 count: 15"
#> [1] "Na2 count: 0"
#> [1] "Na3 count: 0"
#> [1] "Na4 count: 0"
#> [1] "Na5 count: 0"
#> [1] "Na6 count: 0"
#> [1] "Na7 count: 0"
#> [1] "Na8 count: 0"
#> [1] "Na9 count: 0"
#> [1] "Na10 count: 17"
#> [1] "Na11 count: 0"
#> [1] "Na12 count: 0"
#> [1] "Na13 count: 0"
#> [1] "Na14 count: 40"
#> [1] "Na15 count: 0"
#> [1] "Na16 count: 0"
#> [1] "Na17 count: 0"
For goal 1, there are only 9.04% missing values, spread over 15 different countries. Goal 1 being “end poverty”, we decide to keep it and only remove the countries with no information from the analysis.
Code
SDG2 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na1 = mean(is.na(goal1))) |>
filter(Na1 != 0)
print(table(SDG2$Na1))
#>
#> 1
#> 15
length(unique(SDG2$code))/country_number
#> [1] 0.0904
For goal 10, there are only 10.2% missing values, spread over 17 different countries. Goal 10 being “reduced inequalities”, we decide to keep it and only remove the countries with no information from the analysis.
Code
SDG3 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na10 = mean(is.na(goal10))) |>
filter(Na10 != 0)
print(table(SDG3$Na10))
#>
#> 1
#> 17
length(unique(SDG3$code))/country_number
#> [1] 0.102
For goal 14, there are 24.1% missing values, spread over 40 different countries. Goal 14 being “life below water”, we decide not to keep it, because other SDGs such as “life on land” and “clean water and sanitation” already treat similar subjects.
Code
SDG4 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na14 = mean(is.na(goal14))) |>
filter(Na14 != 0)
print(table(SDG4$Na14))
#>
#> 1
#> 40
length(unique(SDG4$code))/country_number
#> [1] 0.241
D1_0_SDG <- D1_0_SDG %>% select(-goal14)
We will be working with different datasets and merge them based on the country code and the year. To make sure the match works well, we verify that the country names are encoded in UTF-8, then we standardize the country names (we needed a custom match for Turkey) and the country codes using the countrycode library. In addition, we create a list of all the country codes contained in the main database in order to filter the other databases. Finally, we complete the database to make sure all the combinations of (country, year) are present. The number of rows is unchanged.
Code
D1_0_SDG$country <- stri_encode(D1_0_SDG$country, to = "UTF-8")
D1_0_SDG <- D1_0_SDG %>%
mutate(country = countrycode(country, "country.name", "country.name", custom_match = c("Türkiye" = "Turkey")))
D1_0_SDG$code <- countrycode(
sourcevar = D1_0_SDG$code,
origin = "iso3c",
destination = "iso3c"
)
list_country <- c(unique(D1_0_SDG$code))
D1_0_SDG_country_list <- D1_0_SDG %>%
filter(code %in% list_country) %>%
select(code, country)
D1_0_SDG_country_list <- D1_0_SDG_country_list %>%
select(code, country) %>%
distinct()
Finally, we complete the database to make sure there are no (year, code) pairs missing.
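As a minimal sketch of this completion step (on a made-up toy data frame, not the real D1_0_SDG):

```r
# A minimal sketch of completing (code, year) combinations, on a toy
# data frame; tidyr::complete() adds the missing pairs as rows filled
# with NA, here FRA/2001.
library(tidyr)

toy <- data.frame(
  code = c("CHE", "CHE", "FRA"),
  year = c(2000L, 2001L, 2000L),
  overallscore = c(75.1, 75.6, 74.0)
)

toy_complete <- complete(toy, code, year)
nrow(toy_complete)  # 4: the missing FRA/2001 row was added with NA score
```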
Here are the first few lines of the cleaned dataset on SDG achievement scores:
For this first dataset, we went from 4'140 observations of 120 variables to 3'818 observations of 21 variables.
As said, this is now our main dataset. All subsequent datasets will be merged with this dataset. Therefore, for all the following datasets, we will make sure that we only keep data for the same countries and years as in this dataset. We have a total of 166 countries and the years range from 2000 to 2022.
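As a minimal sketch of this merging strategy (using made-up toy data frames, not the real datasets):

```r
# A minimal sketch of merging an auxiliary dataset into the main one,
# on toy data frames; left_join() keeps every row of the main table
# and attaches the matching value by country code and year.
library(dplyr)

main <- data.frame(
  code = c("CHE", "CHE", "FRA"),
  year = c(2000L, 2001L, 2000L),
  overallscore = c(75.1, 75.6, 74.0)
)
extra <- data.frame(
  code = c("CHE", "FRA"),
  year = c(2000L, 2000L),
  unemployment.rate = c(0.027, 0.083)
)

merged <- left_join(main, extra, by = c("code", "year"))
```

Rows of the main table with no match (here CHE in 2001) keep an NA in the added column, which is why the missing-value investigations below matter.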
2.3.2 Dataset on Unemployment rate
In this dataset, the initial step involves importing the data. Next, we ensure that the names and codes of the countries are formatted in UTF-8, preventing any discrepancies due to mismatches in country names. Following this, we modify the column names and filter the data to include only the relevant countries and years, specifically the years 2000 to 2022, covering 166 countries from our primary dataset.
Code
D2_1_Unemployment_rate <- read.csv(here("scripts","data","UnemploymentRate.csv")) %>%
as.data.frame() %>%
mutate(
country = iconv(ref_area.label, to = "UTF-8", sub = "byte"),
country = countrycode(country, "country.name", "country.name"),
year = time,
`unemployment rate` = obs_value / 100,
age_category = classif1.label,
sex = sex.label
) %>%
select(-ref_area.label, -time, -obs_value, -classif1.label, -sex.label, -source.label, -obs_status.label, -indicator.label) %>%
merge(D1_0_SDG_country_list[, c("country", "code")], by = "country", all.x = TRUE) %>%
filter(year >= 2000 & year <= 2022,
!str_detect(sex, fixed("Male")) & !str_detect(sex, fixed("Female")),
code %in% D1_0_SDG_country_list$code,
age_category == "Age (Youth, adults): 15+") %>%
select(code, country, year, `unemployment rate`) %>%
distinct()
Here are the first few lines of the cleaned dataset on unemployment rate:
For this dataset, we went from 82'800 observations of 8 variables to 3'812 observations of 5 variables.
Dataset on GDP and military expenditures
We have three different databases which contain information on each country over the years. Each year is represented by one column. We want to extract three variables for our analysis: GDP per capita, military expenditure as a percentage of GDP and military expenditure as a percentage of government expenditure.
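As a toy illustration of this wide layout and the wide-to-long reshaping it requires (using made-up numbers, not the real data):

```r
# A toy illustration of the World Bank wide layout (one column per
# year, prefixed with "X") and its conversion to long format with
# pivot_longer(), one row per country-year.
library(tidyr)

wide <- data.frame(
  Country.Name = c("Switzerland", "France"),
  Country.Code = c("CHE", "FRA"),
  X2000 = c(39000, 23000),
  X2001 = c(40000, 23500)
)

long <- pivot_longer(wide,
                     cols = -c("Country.Name", "Country.Code"),
                     names_to = "year",
                     values_to = "data")
```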
Code
GDPpercapita <-
read.csv(here("scripts","data","GDPpercapita.csv"), sep = ";")
MilitaryExpenditurePercentGDP <-
read.csv(here("scripts","data","MilitaryExpenditurePercentGDP.csv"), sep = ";")
MiliratyExpenditurePercentGovExp <-
read.csv(here("scripts","data","MiliratyExpenditurePercentGovExp.csv"), sep = ";")
After importing the data, we fill in the missing country codes using the column Indicator.Name: we realized after some manipulations that some of the country codes were wrong, but the next column contained the right ones.
Code
fill_code <- function(data){
data <- data %>%
mutate(Country.Code = ifelse(!grepl("^[A-Z]{3}$", Country.Code), Indicator.Name, Country.Code))
}
We create a set of functions that we apply to each database. First, remove the variables that we don't need, i.e. the years before 2000. Second, make sure that the values are numeric and rename the year variables (they all had an "X" before the year number). Third, transform the database from wide to long, in order to match the main database. Fourth, transform the year variable into an integer and rearrange and rename the columns to match those of the other databases. Then, we apply these transformations to the three databases.
Code
remove <- function(data){
years <- seq(1960, 1999)
removeyears <- paste("X", years, sep = "")
data <- data[, !(names(data) %in% c("Indicator.Name", "Indicator.Code", "X", removeyears))]
}
makenum <- function(data) {
for (i in 2000:2022) {
year <- paste("X", i, sep = "")
data[[year]] <- as.numeric(data[[year]])
}
return(data)
}
renameyear <- function(data) {
for (i in 2000:2022) {
varname <- paste("X", i, sep = "")
names(data)[names(data) == varname] <- gsub("X", "", varname)
}
return(data)
}
wide2long <- function(data) {
data <- pivot_longer(data,
cols = -c("Country.Name", "Country.Code"),
names_to = "year",
values_to = "data")
return(data)
}
yearint <- function(data) {
data$year <- as.integer(data$year)
return(data)
}
nameorder <- function(data) {
colnames(data) <- c("country", "code", "year", "data")
data <- data %>% select(c("code", "country", "year", "data"))
}
cleanwide2long <- function(data){
data <- fill_code(data)
data <- remove(data)
data <- makenum(data)
data <- renameyear(data)
data <- wide2long(data)
data <- yearint(data)
data <- nameorder(data)
}
GDPpercapita <- cleanwide2long(GDPpercapita)
MilitaryExpenditurePercentGDP <- cleanwide2long(MilitaryExpenditurePercentGDP)
MiliratyExpenditurePercentGovExp <- cleanwide2long(MiliratyExpenditurePercentGovExp)
We rename the columns with the main information, standardize the country codes and remove the countries that are not in our main database. We see that all 166 countries are there.
Code
GDPpercapita <- GDPpercapita %>%
rename(GDPpercapita = data)
MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>%
rename(MilitaryExpenditurePercentGDP = data)
MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>%
rename(MiliratyExpenditurePercentGovExp = data)
GDPpercapita$code <- countrycode(
sourcevar = GDPpercapita$code,
origin = "iso3c",
destination = "iso3c"
)
MilitaryExpenditurePercentGDP$code <- countrycode(
sourcevar = MilitaryExpenditurePercentGDP$code,
origin = "iso3c",
destination = "iso3c"
)
MiliratyExpenditurePercentGovExp$code <- countrycode(
sourcevar = MiliratyExpenditurePercentGovExp$code,
origin = "iso3c",
destination = "iso3c"
)
GDPpercapita <- GDPpercapita %>% filter(code %in% list_country)
length(unique(GDPpercapita$code))
#> [1] 166
MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>% filter(code %in% list_country)
length(unique(MilitaryExpenditurePercentGDP$code))
#> [1] 166
MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>% filter(code %in% list_country)
length(unique(MiliratyExpenditurePercentGovExp$code))
#> [1] 166
Only 157 countries were initially both in the main SDG dataset and in these 3 datasets, but we suspected that some of the missing countries were in the database without being correctly matched. Indeed, Bahamas was in the database, but instead of the code "BHS" there was "The"; for "COD" it was "Dem. Rep.", for "COG" it was "Rep", etc. We noticed that the code is in another column of the initial database, "Indicator.Name". We went back to the initial database and, before cleaning it, put in the right codes (as seen above); after rerunning the code we see that we have all 166 countries from the initial dataset.
Code
list_country_GDP <- c(unique(GDPpercapita$code))
(missing <- setdiff(list_country, list_country_GDP))
#> character(0)
We run a first round of investigation of the missing values and find that we have 16.4% for MiliratyExpenditurePercentGovExp, 12.9% for MilitaryExpenditurePercentGDP and 1.31% for GDPpercapita.
Code
mean(is.na(MiliratyExpenditurePercentGovExp$MiliratyExpenditurePercentGovExp))
#> [1] 0.164
mean(is.na(MilitaryExpenditurePercentGDP$MilitaryExpenditurePercentGDP))
#> [1] 0.129
mean(is.na(GDPpercapita$GDPpercapita))
#> [1] 0.0131
2.3.2.1 GDP per capita
For GDPpercapita, only two countries (SOM and SSD) have many missing values; in total, 11 countries have missing values.
Code
GDPpercapita1 <- GDPpercapita %>%
group_by(code) %>%
summarize(NaGDP = mean(is.na(GDPpercapita))) %>%
filter(NaGDP != 0)
print(GDPpercapita1, n = 180)
#> # A tibble: 11 x 2
#> code NaGDP
#> <chr> <dbl>
#> 1 AFG 0.130
#> 2 BTN 0.0435
#> 3 CUB 0.0870
#> 4 LBN 0.0435
#> 5 SOM 0.565
#> 6 SSD 0.652
#> 7 STP 0.0435
#> 8 SYR 0.0870
#> 9 TKM 0.0870
#> 10 VEN 0.304
#> 11 YEM 0.130
We plot the evolution of GDPpercapita over the years for each country containing missing values, and distinguish the percentage of missing values with colors.
Code
filtered_data_GDP <- GDPpercapita %>%
filter(code %in% GDPpercapita1$code) # countries with NAs
filtered_data_GDP <- filtered_data_GDP %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(GDPpercapita))) %>% # column % NAs
ungroup()
Evol_Missing_GDP <- ggplot(data = filtered_data_GDP) +
geom_point(aes(x = year, y = GDPpercapita,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 1),
labels = c("0-10%", "10-20%", "20-100%")))) +
labs(title = "Evolution of GDP per capita over time", x = "Year", y = "GDP per capita") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-100%" = "black"),
labels = c("0-10%", "10-20%", "20-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 4)
print(Evol_Missing_GDP)
For the countries with less than 30% of missing values and a linear evolution over time, we fill the missing values using linear interpolation.
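As a toy illustration of what linear interpolation does to such a series (a sketch using base R's approx(); the report itself uses forecast::na.interp()):

```r
# A toy sketch of linear interpolation on a short series with one
# interior NA, using base R's approx(); na.interp() behaves
# similarly for interior gaps in a roughly linear series.
x <- c(100, NA, 120, 130)
idx <- seq_along(x)
filled <- approx(idx[!is.na(x)], x[!is.na(x)], xout = idx)$y
filled  # the NA at position 2 becomes 110, midway between 100 and 120
```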
Code
list_code <- c("AFG", "BTN", "CUB", "STP", "TKM")
for (i in list_code) {
country_data <- GDPpercapita %>% filter(code == i)
interpolated_data <- na.interp(country_data$GDPpercapita)
GDPpercapita[GDPpercapita$code == i, "GDPpercapita"] <- interpolated_data
}
2.3.2.2 Military expenditures in percentage of GDP
For MilitaryExpenditurePercentGDP, 12 countries have 100% of missing values. We investigate further and keep them for now, knowing that some of these countries may also have many missing values in the other databases when we merge everything, and will then be dropped.
Code
MilitaryExpenditurePercentGDP1 <- MilitaryExpenditurePercentGDP %>%
group_by(code) %>%
summarize(NaMil1 = round(mean(is.na(MilitaryExpenditurePercentGDP)),3)) %>%
filter(NaMil1 != 0)
print(table(MilitaryExpenditurePercentGDP1$NaMil1))
#>
#> 0.043 0.087 0.13 0.174 0.217 0.261 0.304 0.348 0.391 0.522 0.565
#> 4 2 7 6 3 3 3 2 1 2 2
#> 0.739 0.783 1
#> 1 1 12
We plot the evolution of MilitaryExpenditurePercentGDP over the years for each country containing missing values, and distinguish the percentage of missing values with colors.
Code
filtered_data_Mil1 <- MilitaryExpenditurePercentGDP %>%
filter(code %in% MilitaryExpenditurePercentGDP1$code) # countries with NAs
filtered_data_Mil1 <- filtered_data_Mil1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
ungroup()
Evol_Missing_Mil1 <- ggplot(data = filtered_data_Mil1) +
geom_line(aes(x = year, y = MilitaryExpenditurePercentGDP,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Military expenditure in % of GDP over time", x = "Years from 2000 to 2022", y = "Military expenditure (% of GDP)") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 6) +
theme(strip.text = element_text(size = 6)) +
scale_x_continuous(breaks = NULL) +
scale_y_continuous(breaks = NULL)
print(Evol_Missing_Mil1)
For the countries with less than 30% of missing values and a linear evolution over time, we fill the missing values using linear interpolation.
Code
list_code <- c("AFG", "BDI", "BEN", "CAF", "CIV", "COD", "GAB", "GMB", "KAZ", "LBN", "LBR", "MNE", "MRT", "NER", "TKJ", "TTO", "ZMB")
for (i in list_code) {
country_data <- MilitaryExpenditurePercentGDP %>% filter(code == i)
interpolated_data <- na.interp(country_data$MilitaryExpenditurePercentGDP)
MilitaryExpenditurePercentGDP[MilitaryExpenditurePercentGDP$code == i, "MilitaryExpenditurePercentGDP"] <- interpolated_data
}
2.3.2.3 Military expenditures in percentage of government expenditures
For MilitaryExpenditurePercentGovExp, 17 countries have 100% of missing values. We investigate further and keep them for now, knowing that some of these countries may also have many missing values in the other databases when we merge everything, and will then be dropped.
Code
MiliratyExpenditurePercentGovExp1 <- MiliratyExpenditurePercentGovExp %>%
group_by(code) %>%
summarize(NaMil2 = round(mean(is.na(MiliratyExpenditurePercentGovExp)),3)) %>%
filter(NaMil2 != 0)
print(table(MiliratyExpenditurePercentGovExp1$NaMil2))
#>
#> 0.043 0.087 0.13 0.174 0.217 0.261 0.304 0.348 0.391 0.478 0.522
#> 5 3 5 4 5 4 4 2 1 1 2
#> 0.565 0.609 0.783 1
#> 2 1 1 17
We plot the evolution of MilitaryExpenditurePercentGovExp over the years for each country containing missing values, and distinguish the percentage of missing values with colors.
Code
filtered_data_Mil2 <- MiliratyExpenditurePercentGovExp %>%
filter(code %in% MiliratyExpenditurePercentGovExp1$code) # Countries with NAs
filtered_data_Mil2 <- filtered_data_Mil2 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MiliratyExpenditurePercentGovExp))) %>% # Column % NAs
ungroup()
Evol_Missing_Mil2 <- ggplot(data = filtered_data_Mil2) +
geom_line(aes(x = year, y = MiliratyExpenditurePercentGovExp,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Military expenditure in % of government expenditures over time", x = "Years from 2000 to 2022", y = "Military expenditure (% of gov. expenditures)") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 7) +
theme(strip.text = element_text(size = 6)) +
scale_x_continuous(breaks = NULL) +
scale_y_continuous(breaks = NULL)
print(Evol_Missing_Mil2)
For the countries with less than 30% of missing values and a linear evolution over time, we fill the missing values using linear interpolation.
Code
list_code <- c("AFG", "ARM", "BEN", "BIH", "BLR", "COG", "ECU", "GAB", "GMB", "KAZ", "LBN", "LBR", "MNE", "MWI", "NER", "TTO", "UKR", "ZMB")
for (i in list_code) {
country_data <- MiliratyExpenditurePercentGovExp %>% filter(code == i)
interpolated_data <- na.interp(country_data$MiliratyExpenditurePercentGovExp)
MiliratyExpenditurePercentGovExp[MiliratyExpenditurePercentGovExp$code == i, "MiliratyExpenditurePercentGovExp"] <- interpolated_data
}
We now look again at the percentage of missing values for the three databases: 14.9% for MiliratyExpenditurePercentGovExp, 11.6% for MilitaryExpenditurePercentGDP and 1.07% for GDPpercapita.
Code
mean(is.na(MiliratyExpenditurePercentGovExp$MiliratyExpenditurePercentGovExp))
#> [1] 0.149
mean(is.na(MilitaryExpenditurePercentGDP$MilitaryExpenditurePercentGDP))
#> [1] 0.116
mean(is.na(GDPpercapita$GDPpercapita))
#> [1] 0.0107
D3_1_GDP_per_capita <- GDPpercapita
D3_2_Military_Expenditure_Percent_GDP <- MilitaryExpenditurePercentGDP
D3_3_Miliraty_Expenditure_Percent_Gov_Exp <- MiliratyExpenditurePercentGovExp
Here are the first few lines of the cleaned dataset of GDP per capita:
For this dataset, we went from ??? observations of 68 variables to 3'818 observations of 4 variables.
Here are the first few lines of the cleaned dataset of military expenditures in percentage of GDP:
For this dataset, we went from ??? observations of 68 variables to 3'818 observations of 4 variables.
Here are the first few lines of the cleaned dataset of military expenditures in percentage of government expenditures:
2.3.3 Dataset on internet usage
To prepare the dataset on internet usage in the world to be merged with the other data, we first import the data. Then, we keep only the years that we are interested in (2000 to 2022). We also rename the columns and keep only the countries that match the list of countries in the main SDG dataset.
Code
D4_0_Internet_usage <- read.csv(here("scripts", "data", "InternetUsage.csv")) %>%
filter(Year >= 2000, Year <= 2022) %>%
rename(
code = Code,
country = Entity,
year = Year,
internet_usage = Individuals.using.the.Internet....of.population.
) %>%
mutate(internet_usage = internet_usage / 100) %>%
filter(code %in% list_country)
Here are the first few lines of the cleaned dataset of internet usage:
For this dataset, we went from 6'570 observations of 4 variables to 3'433 observations of 4 variables.
2.3.4 Dataset on human freedom index
After importing the data from the CATO Institute website, we noticed that even though the file was called "Human Freedom Index 2022", the available observations only ran from 2000 to 2020. We first modified it to match our other datasets, by renaming/encoding/standardizing the columns containing the country names.
Code
data <- read.csv(here("scripts", "data", "human-freedom-index-2022.csv"))
#data in tibble
datatibble <- tibble(data)
# Rename the column countries into country to match the other datbases
names(datatibble)[names(datatibble) == "countries"] <- "country"
# Make sure the encoding of the country names are UTF-8
datatibble$country <- iconv(datatibble$country, to = "UTF-8", sub = "byte")
# standardize country names
datatibble <- datatibble %>%
mutate(country = countrycode(country, "country.name", "country.name"))
Once done, we could verify which countries were or were not present in both this dataset and our main SDG dataset. We decided to keep the ones that matched between the two datasets.
Code
# Merge by country name
datatibble <- datatibble %>%
left_join(D1_0_SDG_country_list, by = "country")
datatibble <- datatibble %>% filter(code %in% list_country)
(length(unique(datatibble$code)))
#> [1] 159
# See which ones are missing
list_country_free <- c(unique(datatibble$code))
(missing <- setdiff(list_country, list_country_free))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
# Turkey was missing but present in the initial database (it was a problem when standardizing the country names of D1_0_SDG_country_list that we corrected) and the other missing countries are: "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
D5_0_Human_freedom_index <- datatibble
Then, we noticed that many columns were not relevant for us, as 141 variables were taken into account. So we decided to keep the ones that refer to the country information (such as code, year, ...) and the human freedom scores per category (pf for personal freedom, ef for economic freedom).
Code
# erasing useless columns to keep only the general ones.
D5_0_Human_freedom_index <- select(D5_0_Human_freedom_index, year, country, region, hf_score, pf_rol, pf_ss, pf_movement, pf_religion, pf_assembly, pf_expression, pf_identity, pf_score, ef_government, ef_legal, ef_money, ef_trade, ef_regulation, ef_score, code)
D5_0_Human_freedom_index <- D5_0_Human_freedom_index %>%
rename(
pf_law = names(D5_0_Human_freedom_index)[5], # Renames the 5th column to "pf_law"
pf_security = names(D5_0_Human_freedom_index)[6] # Renames the 6th column to "pf_security"
)
After renaming the columns pf_law/pf_security for comprehension purposes, we investigated how the NA values are distributed among the countries and the variables. After computing the percentages of missing values per country and variable, heatmaps proved to be a great tool for visualizing the data.
Code
na_percentage_by_country <- D5_0_Human_freedom_index %>%
group_by(country) %>%
select(-code) %>%
summarise(across(everything(), ~mean(is.na(.))*100))
na_long <- na_percentage_by_country %>%
pivot_longer(
cols = -country,
names_to = "Variable",
values_to = "NA_Percentage"
)
overall_na_percentage <- na_long %>%
group_by(Variable) %>%
summarize(Avg_NA_Percentage = mean(NA_Percentage, na.rm = TRUE)) %>%
arrange(desc(Avg_NA_Percentage))
print(overall_na_percentage)
#> # A tibble: 17 x 2
#> Variable Avg_NA_Percentage
#> <chr> <dbl>
#> 1 ef_money 10.4
#> 2 ef_trade 10.4
#> 3 ef_score 10.4
#> 4 hf_score 10.4
#> 5 pf_score 10.4
#> 6 ef_regulation 9.49
#> 7 ef_government 2.91
#> 8 ef_legal 1.71
#> 9 pf_law 1.44
#> 10 pf_identity 0.299
#> 11 pf_assembly 0
#> 12 pf_expression 0
#> 13 pf_movement 0
#> 14 pf_religion 0
#> 15 pf_security 0
#> 16 region 0
#> 17 year 0
Then, to get a better understanding of the situation, we ordered the countries having at least one variable with 50% or more missing values.
Code
na_long <- na_long %>%
group_by(country) %>%
mutate(Count_NA_50_100 = sum(NA_Percentage >= 50 & NA_Percentage <= 100, na.rm = TRUE)) %>%
ungroup() %>%
arrange(desc(Count_NA_50_100))
heatmap_ordered_all <- ggplot(na_long, aes(x = reorder(country, -Count_NA_50_100), y = Variable)) +
geom_tile(aes(fill = NA_Percentage), colour = "white") +
scale_fill_gradient(low = "white", high = "red") +
theme_minimal() +
labs(
title = "Heatmap of NA Percentages per Country and Variable",
x = "Countries",
y = "Variables",
fill = "NA Percentage"
) +
theme(
axis.text.x = element_blank(), # Hide x-axis labels
axis.text.y = element_text(size = 9)
)
print(heatmap_ordered_all)
We notice that only some countries appear to contain at least 50% of missing values, and that most of the missing values concern the EF variables (Economic Freedom). Next, we produced another heatmap containing only these ordered countries, also counting, for each of them, the number of variables with at least 50% of NAs.
Code
na_long_filtered <- na_long %>%
group_by(country) %>%
mutate(Count_NA_50_100 = sum(NA_Percentage >= 50 & NA_Percentage <= 100, na.rm = TRUE)) %>%
filter(Count_NA_50_100 > 0) %>%
ungroup() %>%
arrange(desc(Count_NA_50_100))
heatmap_ordered_filtered <- ggplot(na_long_filtered, aes(x = reorder(country, -Count_NA_50_100), y = Variable)) +
geom_tile(aes(fill = NA_Percentage), colour = "white") +
scale_fill_gradient(low = "white", high = "red") +
theme_minimal() +
labs(
title = "Heatmap of NA Percentages per Country and Variable",
x = "Countries",
y = "Variables",
fill = "NA Percentage"
) +
theme(
axis.text.x = element_text(angle = 90, hjust = 1),
axis.text.y = element_text(size = 7)
)
print(heatmap_ordered_filtered)
country_na_count <- na_long %>%
filter(NA_Percentage >= 50) %>%
group_by(country) %>%
summarise(Count_NA_50_100 = n()) %>%
arrange(desc(Count_NA_50_100))
print(country_na_count)
#> # A tibble: 13 x 2
#> country Count_NA_50_100
#> <chr> <int>
#> 1 Comoros 8
#> 2 Djibouti 8
#> 3 Somalia 8
#> 4 Belarus 6
#> 5 Guinea 6
#> 6 Iraq 6
#> 7 Laos 6
#> 8 Sudan 6
#> 9 Bhutan 5
#> 10 Liberia 5
#> 11 Bahamas 1
#> 12 Belize 1
#> 13 Brunei 1
We conclude that 13 countries are concerned by our threshold of 50% or more missing values. After discussion, we came to the conclusion that a great part of these 13 countries were not going to be selected anyway, because they also had a lot of missing values in our main dataset. Therefore, we decided to merge this data with the other datasets and finish the cleaning afterwards.
Here are the first few lines of the partially cleaned dataset on Human Freedom Index scores:
For this dataset, we went from 3’465 observations for 141 variables to length(D5_0_Human_freedom_index$code) observations for 19 variables.
2.3.5 Dataset on Disasters
For this dataset concerning the disasters, we imported the data from Kaggle, as we couldn’t find the original dataset, which is private and comes from the EOSDIS system, an interactive interface for browsing full-resolution, global, daily satellite images from NASA. Once we made sure that our file called “Disasters” was converted into a data frame, we selected the specific columns that we were interested in.
Code
Disasters <- read.csv(here("scripts","data","Disasters.csv"))
Disasters <- as.data.frame(Disasters)
Disasters <- Disasters %>%
select(Year, Country, ISO, Location, Continent, Disaster.Subgroup, Disaster.Type, Total.Deaths, No.Injured, No.Affected, No.Homeless, Total.Affected, Total.Damages...000.US..)
Because our file showed all the disasters in each country over the years (1970-2021) and we wanted to focus on a specific period, we filtered our data to keep the years between 2000 and 2022. Then we rearranged our data, changing the data types of all the columns and their names in order to match our other datasets.
Code
# Rearrange the columns, changed the type of data, renamed the columns
Rearanged_Disasters <- Disasters %>%
filter(Year >= 2000 & Year <= 2022) %>%
mutate(
code = as.character(ISO),
country = as.character(Country),
year = as.integer(Year),
continent = as.character(Continent),
disaster.subgroup = as.character(Disaster.Subgroup),
disaster.type = as.character(Disaster.Type),
location = as.character(Location),
total.deaths = as.numeric(Total.Deaths),
no.injured = as.numeric(No.Injured),
no.affected = as.numeric(No.Affected),
no.homeless = as.numeric(No.Homeless),
total.affected = as.numeric(Total.Affected),
total.damages = as.numeric(Total.Damages...000.US..)
)
We then grouped the data by “year”, “code”, “country” and “continent” and summarized it. Here you can see that we re-selected specific columns, as our first pre-selection was still too wide and some variables such as disaster.subgroup and disaster.type were not pertinent. We arranged the columns based on “code”, “country”, “year” and “continent” to match the other datasets.
Code
Disasters <- Rearanged_Disasters %>%
group_by(year,code, country, continent) %>%
summarize(
total_deaths = sum(total.deaths, na.rm = TRUE),
no_injured = sum(no.injured, na.rm = TRUE),
no_affected = sum(no.affected, na.rm = TRUE),
no_homeless = sum(no.homeless, na.rm = TRUE),
total_affected = sum(total.affected, na.rm = TRUE),
total_damages = sum(total.damages, na.rm = TRUE)
)
D6_0_Disasters <- Disasters %>%
select(code, country, year, continent, total_deaths, no_injured, no_affected, no_homeless, total_affected, total_damages) %>%
arrange(code, country, year, continent)
Finally, we filtered our disasters data to keep only the countries that are present in our main dataset. We analysed the missing countries and identified three (BHR, BRN, MLT) that are unexpectedly missing.
Code
D6_0_Disasters <- D6_0_Disasters %>% filter(code %in% list_country)
length(unique(D6_0_Disasters$code))
#> [1] 163
# Here we see which countries are missing
list_country_disasters <- c(unique(D6_0_Disasters$code))
(missing <- c(missing,setdiff(list_country, list_country_disasters)))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB" "BHR" "BRN" "MLT"
Here are the first few lines of the cleaned dataset on Disasters:
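The grouped summation used for the disaster counts can be illustrated with a minimal base-R sketch on made-up records (the real pipeline uses dplyr's group_by/summarize):

```r
# Two toy disaster records for country "AAA" and one for "BBB" in the same year;
# NA means the death count was not reported for that event
toy <- data.frame(code = c("AAA", "AAA", "BBB"),
                  year = 2005L,
                  deaths = c(10, NA, 5))
# na.action = na.pass keeps NA rows so that na.rm = TRUE can ignore them per group
agg <- aggregate(deaths ~ code + year, data = toy,
                 FUN = function(x) sum(x, na.rm = TRUE),
                 na.action = na.pass)
agg$deaths
#> [1] 10  5
```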
2.3.6 Dataset on COVID
This dataset contains information on the COVID19 pandemic between 2020 and 2022. The observations are daily (by year, month and day). After importing the database, we transform the date in format YYYY-MM-DD in order to keep only the year.
Code
COVID <- read.csv(here("scripts","data","COVID.csv"))
COVID <- COVID[,c("iso_code", "location", "date", "new_cases_per_million", "new_deaths_per_million", "stringency_index")]
COVID$date <- as.integer(year(COVID$date))
We perform a first round of investigation of the missing values before aggregating the values by year. We begin with the variables “cases per million” and “deaths per million”: seeing that for each country we have either only missing values or a very low percentage of missing values (~1%), we can compute the sum over each year and ignore the missing values without altering the data. Note that where all the values are missing, the sum with na.rm = TRUE returns 0 rather than NA, so fully missing countries have to be checked separately. We then look at the “stringency” variable, where we have 3 scenarios:
~20% missing: we ignore missing values when computing the mean to get an idea of the stringency each year (since we compute the mean stringency over the year, a few missing days are not a problem; stringency cannot evolve that fast).
all are missing: we can ignore the missing values when computing the mean, because it will still return a missing value (NaN).
almost all are missing: here the mean doesn’t make sense, so we will replace the values by NAs to be coherent. The countries with this issue are ERI, GUM, PRI and VIR. We verify whether they are in our main dataset, and since none of them are, we can ignore the issue; these rows will be removed later anyway.
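These scenarios follow from how na.rm = TRUE behaves on partially versus fully missing vectors; a short illustration (our own, not from the original code):

```r
mean(c(0.2, NA, 0.4), na.rm = TRUE)        # few NAs: they are simply ignored
#> [1] 0.3
mean(c(NA_real_, NA_real_), na.rm = TRUE)  # all NA: NaN, still detected by is.na()
#> [1] NaN
sum(c(NA_real_, NA_real_), na.rm = TRUE)   # all NA: 0, so fully missing groups need care
#> [1] 0
```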
We aggregate the daily observations into one observation per country and year (sums for cases and deaths, mean for stringency).
Code
COVID1 <- COVID %>%
group_by(iso_code) %>%
summarize(NaCOVID = round(mean(is.na(new_cases_per_million)),3)) %>%
filter(NaCOVID != 0)
print(table(COVID1$NaCOVID))
#>
#> 0.001 0.002 0.003 0.004 0.012 0.109 1
#> 33 6 2 5 1 1 9
COVID2 <- COVID %>%
group_by(iso_code) %>%
summarize(NaCOVID = round(mean(is.na(new_deaths_per_million)),3)) %>%
filter(NaCOVID != 0)
print(table(COVID2$NaCOVID))
#>
#> 0.001 0.002 0.004 0.11 1
#> 32 1 2 1 9
COVID3 <- COVID %>%
group_by(iso_code) %>%
summarize(NaCOVID = round(mean(is.na(stringency_index)), 3)) %>%
filter(NaCOVID != 0)
print(table(COVID3$NaCOVID))
#>
#> 0.13 0.186 0.198 0.21 0.986 1
#> 1 1 1 178 4 70
issue_list <- c("ERI", "GUM", "PRI", "VIR")
is.element(issue_list, list_country)
#> [1] FALSE FALSE FALSE FALSE
COVID <- COVID %>%
group_by(location, date) %>%
mutate(
cases_per_million = sum(new_cases_per_million, na.rm = TRUE),
deaths_per_million = sum(new_deaths_per_million, na.rm = TRUE),
stringency = mean(stringency_index, na.rm = TRUE)
)%>%
ungroup()
Now that all the variables of interest are aggregated by year, we remove all the variables that we don’t need and rename the remaining ones to match the main dataset.
Code
COVID <- COVID %>%
group_by(location, date) %>%
distinct(date, .keep_all = TRUE) %>%
ungroup()
COVID <- COVID %>% select(-c(new_cases_per_million, new_deaths_per_million, stringency_index))
colnames(COVID) <- c("code", "country", "year", "cases_per_million", "deaths_per_million", "stringency")
We remove the years beyond 2022, make sure that the country codes are all three-letter ISO codes (we observe that they are sometimes preceded by “OWID_”), and standardize the country codes.
Code
COVID <- COVID[COVID$year <= 2022, ]
COVID$code <- gsub("OWID_", "", COVID$code)
COVID$code <- countrycode(
sourcevar = COVID$code,
origin = "iso3c",
destination = "iso3c"
)
We remove the observations of countries that aren’t in our main dataset on SDGs and find that all 166 countries of the main SDG dataset are also present in this one.
Code
COVID <- COVID %>% filter(code %in% list_country)
length(unique(COVID$code))
#> [1] 166
We perform a second round of missing-value investigation and find that there are no missing values except for stringency, where 4.19% are missing. For each affected country, either all values are missing or 50% are, so these 7 countries won’t be included when analyzing the effect of stringency on the SDG scores.
Code
mean(is.na(COVID$cases_per_million))
#> [1] 0
mean(is.na(COVID$deaths_per_million))
#> [1] 0
mean(is.na(COVID$stringency))
#> [1] 0.0419
COVID4 <- COVID %>%
group_by(code) %>%
summarize(NaCOVID = mean(is.na(stringency))) %>%
filter(NaCOVID != 0)
print(COVID4, n = 300)
#> # A tibble: 7 x 2
#> code NaCOVID
#> <chr> <dbl>
#> 1 ARM 1
#> 2 COM 1
#> 3 MDV 1
#> 4 MKD 1
#> 5 MNE 1
#> 6 NAM 0.5
#> 7 STP 1
D7_0_COVID <- COVID
Here are the first few lines of the cleaned dataset on COVID19:
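The mutate-then-distinct pattern used for the yearly aggregation (computing a grouped aggregate, then keeping one row per group) can be mimicked in base R; a toy sketch with made-up numbers:

```r
# Daily toy records for one country: aggregate cases to one row per year
toy <- data.frame(year = c(2020, 2020, 2021),
                  cases = c(1, 2, 5))
toy$cases_per_year <- ave(toy$cases, toy$year, FUN = sum)  # yearly total, repeated on each row
toy_unique <- toy[!duplicated(toy$year), ]                 # keep the first row per year
toy_unique$cases_per_year
#> [1] 3 5
```

A grouped summarize would achieve the same in one step; the mutate/distinct route keeps the other columns without listing them explicitly.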
2.3.7 Dataset on Conflicts
For our conflicts dataset, we imported the data from “The World Bank” data catalog. Once we made sure that our file called “Conflicts” was converted into a data frame, we selected the specific columns that we were interested in.
Code
Conflicts <- read.csv(here("scripts","data","Conflicts.csv"))
Conflicts <- as.data.frame(Conflicts)
Conflicts <- Conflicts %>%
select(year, country, ongoing, gwsum_bestdeaths, pop_affected, peaceyearshigh, area_affected, maxintensity, maxcumulativeintensity)Our file showed all the Conflicts and consequences per country over the years (between 2000-2016). We couldn’t find a better and more complete dataset, As we consider conflicts as events, we will only take into account results between 2000 and 2016. Then we rearranged our data, changing the data types of all the columns and their names in order to match our other datasets. We grouped the data by ” year”, “country”, re-selected some variables and summarize the data.
Code
Rearanged_Conflicts <- Conflicts %>%
filter(year >= 2000 & year <= 2022)%>%
mutate(
ongoing = as.integer(ongoing),
country = as.character(country),
year = as.integer(year),
gwsum_bestdeaths = as.numeric(gwsum_bestdeaths),
pop_affected = as.numeric(pop_affected),
area_affected = as.numeric(area_affected),
maxintensity = as.numeric(maxintensity),
)
# Group the data by "year", "country" and summarize the data
Conflicts <- Rearanged_Conflicts %>%
group_by(year, country) %>%
summarize(
ongoing = sum(ongoing, na.rm = TRUE),
sum_deaths = sum(gwsum_bestdeaths, na.rm = TRUE),
pop_affected = sum(pop_affected, na.rm = TRUE),
area_affected = sum(area_affected, na.rm = TRUE),
maxintensity = sum(maxintensity, na.rm = TRUE),
)
Afterwards, we selected specific columns from the summarized data and arranged it by our chosen columns. To make our dataset compatible with the main one and let the merging phase succeed, we made some adjustments to the country names. We then standardized and merged by country name, and finally rearranged the data to retain only the countries present in our main dataset. Note that in the end only one new country is missing, one that was absent from the initial conflicts database: BLR.
Code
conflicts <- Conflicts %>%
select(country, year, ongoing, sum_deaths, pop_affected, area_affected, maxintensity) %>%
arrange(country, year)
conflicts$country <- iconv(conflicts$country, to = "UTF-8", sub = "byte")
conflicts <- conflicts %>%
mutate(country = countrycode(country, "country.name", "country.name"))
conflicts <- conflicts %>%
left_join(D1_0_SDG_country_list, by = "country")
conflicts <- conflicts %>%
select(code, country, year, ongoing, sum_deaths, pop_affected, area_affected, maxintensity) %>%
arrange(code, country, year)
D8_0_Conflicts <- conflicts %>% filter(code %in% list_country)
(length(unique(conflicts$code)))
#> [1] 166
# See which countries are missing
list_country_conflicts <- c(unique(conflicts$code))
(missing <- c(missing, setdiff(list_country, list_country_conflicts)))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB" "BHR" "BRN" "MLT"
#> [11] "BLR"
Here are the first few lines of the cleaned dataset on Conflicts:
2.3.8 Merge data
By merging our eight pre-cleaned datasets, we create a final database.
Code
D2_1_Unemployment_rate$country <- NULL
merge_1_2 <- D1_0_SDG |> left_join(D2_1_Unemployment_rate, join_by(code, year))
D3_1_GDP_per_capita$country <- NULL
merge_12_3 <- merge_1_2 |> left_join(D3_1_GDP_per_capita, join_by(code, year))
D3_2_Military_Expenditure_Percent_GDP$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_2_Military_Expenditure_Percent_GDP, join_by(code, year))
D3_3_Miliraty_Expenditure_Percent_Gov_Exp$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_3_Miliraty_Expenditure_Percent_Gov_Exp, join_by(code, year))
D4_0_Internet_usage$country <- NULL
merge_123_4 <- merge_12_3 |> left_join(D4_0_Internet_usage, join_by(code, year))
D5_0_Human_freedom_index$country <- NULL
merge_1234_5 <- merge_123_4 |> left_join(D5_0_Human_freedom_index, join_by(code, year))
D6_0_Disasters$country <- NULL
merge_12345_6 <- merge_1234_5 |> left_join(D6_0_Disasters, join_by(code, year))
D7_0_COVID$country <- NULL
D7_0_COVID <- D7_0_COVID |> distinct(code, year, .keep_all = TRUE)
merge_123456_7 <- merge_12345_6 |> left_join(D7_0_COVID, join_by(code, year))
D8_0_Conflicts$country <- NULL
all_Merge <- merge_123456_7 |> left_join(D8_0_Conflicts, join_by(code, year))
all_Merge <- all_Merge %>% filter(!code %in% missing)
2.3.9 Cleaning of the final database
We replace the NAs of the COVID columns by 0 (because these are not real missing values; they were only introduced by the merge for the years before COVID).
Code
all_Merge <- all_Merge %>%
mutate(
cases_per_million = ifelse(is.na(cases_per_million), 0, cases_per_million),
deaths_per_million = ifelse(is.na(deaths_per_million), 0, deaths_per_million),
stringency = ifelse(is.na(stringency), 0, stringency)
)
Since we took the information on the continent and region from databases other than the main one, we complete this information for the whole final dataset.
Code
all_Merge <- all_Merge %>%
group_by(country) %>%
mutate(continent = ifelse(is.na(continent), first(na.omit(continent)), continent)) %>%
ungroup()
all_Merge <- all_Merge %>%
group_by(country) %>%
mutate(region = ifelse(is.na(region), first(na.omit(region)), region)) %>%
ungroup()
We order the database, beginning with the information on the country, the year, the continent and the region.
Code
all_Merge <- all_Merge %>%
select(code, year, country, continent, region, everything())
write.csv(all_Merge, file = here("scripts","data","all_Merge.csv"))
Here are the first few lines of the final dataset:
Final structure of our merged database: each of the 166 countries from D1_1_SDG is observed each year from 2000 to 2022, so each row has a key composed of (code, year) that uniquely identifies an observation. The other columns are the variables listed above. Because some countries have a lot of missing information, we will have to eliminate some of them, but we will still have more than 2000 rows in our database.
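This key structure can be checked mechanically; a minimal sketch on toy rows (the same test would apply to all_Merge):

```r
# (code, year) is a valid key iff no pair of values repeats
toy <- data.frame(code = c("FRA", "FRA", "DEU"),
                  year = c(2000, 2001, 2000))
anyDuplicated(toy[, c("code", "year")]) == 0
#> [1] TRUE
```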
2.3.10 Treatment of missing values
We load our final database and visualize the missing values.
Code
all_Merge <- read.csv(here("scripts","data","all_Merge.csv"))
all_Merge <- all_Merge %>% select(-c(X))
# Create a dataframe with the goals without NAs summarize in one column to simplify the visualization
goal_vars <- all_Merge %>%
select(starts_with("goal")) %>%
filter_all(all_vars(!is.na(.))) %>%
colnames()
to_plot_missing <- all_Merge %>%
mutate(Goals_without_NAs = rowSums(!is.na(select(., all_of(goal_vars))))) %>%
select(-c(goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal11, goal12, goal13, goal15, goal16, goal17))
vis_dat(to_plot_missing, warn_large_data = FALSE) + scale_fill_brewer(palette = "Paired") +
theme(
axis.text.x = element_text(angle = 90, size = 6),
legend.text = element_text(size = 8), # Adjust the size of legend text
legend.title = element_text(size = 10)
)
We subset our database according to the data that we will need in order to answer the different questions. This will help us deal with the missing values.
For question 1, we only keep the years until 2020, because most of the explanatory variables that we want to use (those coming from the human freedom index) only have values until 2020.
Code
data_question1 <- all_Merge %>% filter(year<=2020) %>% select(-c(total_deaths, no_injured, no_affected, no_homeless, total_affected, total_damages, cases_per_million, deaths_per_million, stringency, ongoing, sum_deaths, pop_affected, area_affected, maxintensity))
For questions 2 and 4, we use the main data from the SDG database.
Code
data_question24 <- all_Merge %>% select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17))
For question 3, we create 3 distinct databases according to the different types of events that we will analyse: disasters, COVID19 and conflicts. For the disasters, we only keep the years until 2021, because we have no data after this date. For the conflicts, we only keep the years until 2016, for the same reason.
Code
# Disasters
data_question3_1 <- all_Merge %>% filter(year<=2021) %>% select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17, total_deaths, no_injured, no_affected, no_homeless, total_affected, total_damages))
# COVID
data_question3_2 <- all_Merge %>% select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17, cases_per_million, deaths_per_million, stringency))
# Conflicts
data_question3_3 <- all_Merge %>% filter(year<=2016) %>% select(c(code, year, country, continent, region, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17, ongoing, sum_deaths, pop_affected, area_affected, maxintensity))
2.3.10.1 Data for question 1
We begin by visualizing the missing values. To have a less messy graph, we group all the goals without NAs into one single variable.
Code
# Create a dataframe with the goals without NAs summarize in one column to simplify the visualization
variable_names <- names(data_question1)
missing_percentages <- sapply(data_question1, function(col) mean(is.na(col)) * 100)
missing_data_summary <- data.frame(
Variable = variable_names,
Missing_Percentage = missing_percentages
)
missing_data_summary <- missing_data_summary %>%
mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))
ggplot(data = missing_data_summary, aes(x = reorder(VariableGroup, Missing_Percentage), y = Missing_Percentage, fill = Missing_Percentage)) +
geom_bar(stat = "identity") +
geom_text(aes(label = ifelse(Missing_Percentage > 1, sprintf("%.1f%%", Missing_Percentage), ""),
y = Missing_Percentage),
position = position_stack(vjust = 1), # Adjust vertical position
color = "white", # Text color
size = 2, # Text size
hjust = 1.05) +
labs(title = "Percentage of Missing Values by Variable",
x = "Variable",
y = "Missing Percentage") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size=6 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(fill = "% NAs") +
coord_flip()
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. We decide to remove the countries that have more than 50 missing values.
Code
see_missing1_1 <- data_question1 %>%
group_by(code) %>%
summarise(across(-c(year, country, continent, region, population, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17),
~ sum(is.na(.))) %>%
mutate(num_missing = rowSums(across(everything()))) %>%
filter(num_missing > 50))
data_question1 <- data_question1 %>% filter(!code %in% see_missing1_1$code)
list_country_deleted <- c(unique(see_missing1_1$code))
Here is the graph that allows us to visualize which countries have missing values, how many and for which variables, when there are more than 50 NAs in total.
Code
ggplot(see_missing1_1, aes(x = num_missing , y = reorder(code, num_missing), fill = num_missing)) +
geom_bar(stat = "identity") +
scale_fill_gradient(low = "lightgreen", high = "darkgreen") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size=8 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(title = "Number of missing values per country containing at least 50 NAs", x = "Number of Missing Values", y = "Countries")
Now, looking at the remaining countries that have missing values and their number across all variables, we decide to remove MiliratyExpenditurePercentGovExp, because it has too many missing values and contains information similar to MilitaryExpenditurePercentGDP.
Code
see_missing1_2 <- data_question1 %>%
group_by(code) %>%
summarise(across(-c(year, country, continent, region, population, overallscore, goal1, goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11, goal12, goal13, goal15, goal16, goal17),
~ sum(is.na(.))) %>%
mutate(num_missing = rowSums(across(everything()))) %>%
filter(num_missing > 0))
data_question1 <- data_question1 %>% select(-MiliratyExpenditurePercentGovExp)
Here is the ggplot that helps us visualize the countries that have missing values after removing those with more than 50 NAs.
Code
ggplot(see_missing1_2, aes(x = num_missing , y = reorder(code, num_missing), fill = num_missing)) +
geom_bar(stat = "identity", width = 0.5) +
scale_fill_gradient(low = "lightgreen", high = "darkgreen") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size= 6 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(title = "Number of missing values per country", x = "Number of Missing Values", y = "Countries")
2.3.10.1.1 GDP per capita
Only Venezuela has missing values that we cannot fill, so we delete the country.
Code
question1_missing_GDP <- data_question1 %>%
group_by(code) %>%
summarize(NaGDPpercapita = mean(is.na(GDPpercapita)))%>%
filter(NaGDPpercapita != 0)
data_question1 <- data_question1 %>% filter(code!="VEN")
list_country_deleted <- c(list_country_deleted, "VEN")
2.3.10.1.2 Military expenditure in % of GDP
To begin with, we delete the countries with more than 30% missing values.
Code
question1_missing_Military <- data_question1 %>%
group_by(code) %>%
summarize(NaMilitary = mean(is.na(MilitaryExpenditurePercentGDP)))%>%
filter(NaMilitary != 0)
data_question1 <- data_question1 %>% filter(code!="BRB" & code!="CRI" & code!="HTI" & code!="ISL" & code!="PAN" & code!="SYR")
list_country_deleted <- c(list_country_deleted, "BRB", "CRI", "HTI", "ISL", "PAN", "SYR")
Then, we look at the distribution of the variable per region. Seeing that all the distributions are skewed, we decide to replace the missing values, where less than 30% are missing, using the median by region.
Code
question1_missing_Military <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_Military <- ggplot(data = question1_missing_Military) +
geom_histogram(aes(x = MilitaryExpenditurePercentGDP,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of Military expenditures in % of GDP", x = "Military expenditures in % of GDP", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 3)
print(Freq_Missing_Military)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(MilitaryExpenditurePercentGDP))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(MilitaryExpenditurePercentGDP, na.rm = TRUE),
MilitaryExpenditurePercentGDP = ifelse(
PercentageMissingByCode < 0.3 & !is.na(MilitaryExpenditurePercentGDP),
MilitaryExpenditurePercentGDP,
ifelse(PercentageMissingByCode < 0.3, MedianByRegion, MilitaryExpenditurePercentGDP)
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion)
2.3.10.1.3 Internet usage
There are only low percentages of missing values.
Code
question1_missing_Internet <- data_question1 %>%
group_by(code) %>%
summarize(NaInternet = mean(is.na(internet_usage)))%>%
filter(NaInternet != 0)
We look at the evolution of the variable over time. We fill the missing values with linear interpolation, because all series increase almost linearly, except for CIV, which we delete.
Code
question1_missing_Internet <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(internet_usage))) %>% # Column % NAs
filter(code %in% question1_missing_Internet$code)
Evol_Missing_Internet <- ggplot(data = question1_missing_Internet) +
geom_line(aes(x = year, y = internet_usage,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of internet usage over time", x = "Years from 2000 to 2022", y = "Internet usage") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
scale_x_continuous(breaks=NULL)+
facet_wrap(~ code, nrow = 4)
print(Evol_Missing_Internet)
list_code <- setdiff(unique(question1_missing_Internet$code), "CIV")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$internet_usage)
data_question1[data_question1$code == i, "internet_usage"] <- interpolated_data
}
data_question1 <- data_question1 %>% filter(code!="CIV")
list_country_deleted <- c(list_country_deleted, "CIV")
2.3.10.1.4 Human freedom index
First, we remove hf_score, pf_score and ef_score, because they have many missing values, and since these variables summarize the other ones, deleting them will not make us lose information.
Code
data_question1 <- data_question1 %>% select(-c(hf_score, pf_score, ef_score))
2.3.10.1.4.1 Personal freedom: law
The variable pf_law has (many) NAs, but only for one country: BLZ, so we decide to remove it.
Code
data_question1 <- data_question1 %>% filter(code!="BLZ")
list_country_deleted <- c(list_country_deleted, "BLZ")
2.3.10.1.4.2 Economic freedom: government
Only KGZ and SRB have missing values. We plot the values over time and fill each gap with the closest available year, since there are only one and two missing values respectively.
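The fill rule above amounts to a backward fill: a leading missing value is copied from the first year that has an observation. A minimal sketch with hypothetical numbers (the report does this per country in R with `ifelse()`):

```python
import pandas as pd

# Hypothetical ef_government series for one country with two leading NAs.
ef_gov = pd.DataFrame({
    "code": ["SRB"] * 4,
    "year": [2000, 2001, 2002, 2003],
    "ef_government": [None, None, 6.1, 6.3],
})

# bfill() propagates the next valid observation backwards.
ef_gov["ef_government"] = ef_gov["ef_government"].bfill()
print(ef_gov["ef_government"].tolist())  # [6.1, 6.1, 6.1, 6.3]
```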
Code
data_question1 %>%
filter(code %in% c("KGZ", "SRB")) %>%
ggplot(aes(x = year, y = ef_government)) +
geom_point(color = "green") +
facet_wrap(~ code, nrow = 1) +
labs(title = "Evolution of economic freedom: government over time", x = "Years", y = "ef_gov")
data_question1 <- data_question1 %>%
mutate(ef_government = ifelse(code == "KGZ" & year == 2000 & is.na(ef_government), ef_government[which(code == "KGZ" & year == 2001)], ef_government))
data_question1 <- data_question1 %>%
mutate(ef_government = ifelse(code == "SRB" & year == 2000 & is.na(ef_government), ef_government[which(code == "SRB" & year == 2002)], ef_government))
data_question1 <- data_question1 %>%
mutate(ef_government = ifelse(code == "SRB" & year == 2001 & is.na(ef_government), ef_government[which(code == "SRB" & year == 2002)], ef_government))
2.3.10.1.4.3 Economic freedom: money
18 countries have missing values, but the percentage of missing values is always below 25%.
Code
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_money = mean(is.na(ef_money)))%>%
filter(Na_ef_money != 0)
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
Code
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
filter(code %in% question1_missing_ef_money$code)
Evol_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
geom_line(aes(x = year, y = ef_money,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of economic freedom: money over time", x = "Years from 2000 to 2022", y = "ef_money") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 4) +
scale_x_continuous(breaks = NULL)
print(Evol_Missing_ef_money)
list_code <- c("ARM", "BFA", "BIH", "GEO", "KAZ", "LSO", "MDA", "MKD")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$ef_money)
data_question1[data_question1$code == i, "ef_money"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
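The median-by-region rule can be summarized compactly. Here is a sketch with toy data (column names are illustrative; the report implements this in R with grouped `mutate()` and `ifelse()`): within each region, remaining NAs are replaced by that region's median of the variable.

```python
import pandas as pd

# Toy data: two regions, one NA in each.
df = pd.DataFrame({
    "region": ["Europe", "Europe", "Europe", "Africa", "Africa"],
    "ef_money": [8.0, None, 9.0, 5.0, None],
})

# Group by region and fill NAs with the region's median.
df["ef_money"] = df.groupby("region")["ef_money"].transform(
    lambda s: s.fillna(s.median())
)
print(df["ef_money"].tolist())  # [8.0, 8.5, 9.0, 5.0, 5.0]
```

The median is preferred to the mean here precisely because the regional distributions are skewed, so the mean would be pulled toward the tail.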
Code
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
geom_histogram(aes(x = ef_money,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: money", x = "ef_money", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 2)
print(Freq_Missing_ef_money)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_money))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(ef_money, na.rm = TRUE),
ef_money = ifelse(
PercentageMissingByCode < 0.3 & !is.na(ef_money),
ef_money,
ifelse(PercentageMissingByCode < 0.3, MedianByRegion, ef_money)
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion)
2.3.10.1.4.4 Economic freedom: trade
19 countries have missing values, but the percentage of missing values is always below 25%.
Code
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_trade = mean(is.na(ef_trade)))%>% # Column % NAs
filter(Na_ef_trade != 0)
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_trade))) %>%
filter(code %in% question1_missing_ef_trade$code)
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
Code
Evol_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
geom_line(aes(x = year, y = ef_trade,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of economic freedom: trade over time", x = "Years from 2000 to 2022", y = "ef_trade") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
facet_wrap(~ code, nrow = 4) +
scale_x_continuous(breaks = NULL)
print(Evol_Missing_ef_trade)
# Linear interpolation for "AZE", "BFA", "ETH", "GEO", "VNH"
list_code <- c("AZE", "BFA", "ETH", "GEO", "VNH")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$ef_trade)
data_question1[data_question1$code == i, "ef_trade"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
Code
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_trade))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
geom_histogram(aes(x = ef_trade,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: trade", x = "ef_trade", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 2)
print(Freq_Missing_ef_trade)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_trade))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(ef_trade, na.rm = TRUE),
ef_trade = ifelse(
PercentageMissingByCode < 0.3 & !is.na(ef_trade),
ef_trade,
ifelse(PercentageMissingByCode < 0.3, MedianByRegion, ef_trade)
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion)
2.3.10.1.4.5 Economic freedom: regulation
12 countries have missing values, but the percentage of missing values is always below 25%.
Code
question1_missing_ef_regulation <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_regulation = mean(is.na(ef_regulation)))%>% # Column % NAs
filter(Na_ef_regulation != 0)
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
Code
question1_missing_ef_regulation <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_regulation))) %>%
filter(code %in% question1_missing_ef_regulation$code)
Evol_Missing_ef_regulation <- ggplot(data = question1_missing_ef_regulation) +
geom_line(aes(x = year, y = ef_regulation,
color = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
labs(title = "Evolution of economic freedom: regulation over time", x = "Years from 2000 to 2022", y = "ef_regulation") +
scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
guides(color = guide_legend(title = "% missings")) +
scale_x_continuous(breaks = NULL)+
facet_wrap(~ code, nrow = 2)
print(Evol_Missing_ef_regulation)
list_code <- c("ETH", "KAZ", "MDA", "SRB")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$ef_regulation)
data_question1[data_question1$code == i, "ef_regulation"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
Code
question1_missing_ef_regulation <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_regulation))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_regulation <- ggplot(data = question1_missing_ef_regulation) +
geom_histogram(aes(x = ef_regulation,
fill = cut(PercentageMissing,
breaks = c(0, 0.1, 0.2, 0.3, 1),
labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: regulation", x = "ef_regulation", y = "Frequency") +
scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
guides(fill = guide_legend(title = "% missings")) +
facet_wrap(~ region, nrow = 1)
print(Freq_Missing_ef_regulation)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_regulation))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(ef_regulation, na.rm = TRUE),
ef_regulation = ifelse(
PercentageMissingByCode < 0.3 & !is.na(ef_regulation),
ef_regulation,
ifelse(PercentageMissingByCode < 0.3, MedianByRegion, ef_regulation)
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion) %>%
ungroup()
Now we notice that missing values remain only for goals 1 and 10. As before, we investigate where the NAs are located in our dataset, first for goal 1, then for goal 10.
Code
na_count <- sapply(data_question1, function(x) sum(is.na(x)))
na_count_df <- data.frame(variable = names(na_count), num_missing = na_count)
na_count_df_filtered <- subset(na_count_df, num_missing > 0)
ggplot(na_count_df_filtered, aes(x= num_missing,y=variable, fill = num_missing)) +
geom_bar(stat = "identity", width = 0.8, fill = 'lightblue') +
geom_text(aes(label = num_missing), vjust = 0.5,hjust = 1.1, position = position_dodge(width = 0.9)) +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size=10 ),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
labs(title = "Number of remaining missing values per variable ",
x = "Number of NAs",
y = "Variables")
# goal1
question1_missing_goal1 <- data_question1 %>%
group_by(code) %>%
summarize(Na_goal1 = mean(is.na(goal1)))%>%
filter(Na_goal1 != 0)
data_question1 <- data_question1 %>% filter(!code %in% question1_missing_goal1$code)
# Update List of countries deleted
list_country_deleted <- c(list_country_deleted, "KWT","NZL","OMN","SGP","UKR")
# still 42 NA values in goal10
We found that the missing values for goal 1 were located in only 5 countries, so we decided to remove them. At this stage, only 42 missing values remained. We then apply the same step to goal 10.
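The removal rule is the complement of imputation: drop every country that has at least one NA in the target column. A minimal sketch with toy data (the report does this in R with `filter(!code %in% ...)`):

```python
import pandas as pd

# Toy panel: one country with a gap in goal1, one complete.
df = pd.DataFrame({
    "code": ["KWT", "KWT", "FRA", "FRA"],
    "goal1": [None, 55.0, 98.0, 99.0],
})

# Countries with any missing goal1 value are dropped entirely.
bad = df.loc[df["goal1"].isna(), "code"].unique()
df = df[~df["code"].isin(bad)]
print(sorted(df["code"].unique()))  # ['FRA']
```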
Code
#goal10
question1_missing_goal10 <- data_question1 %>%
group_by(code) %>%
summarize(Na_goal10 = mean(is.na(goal10)))%>%
filter(Na_goal10 != 0)
data_question1 <- data_question1 %>% filter(!code %in% question1_missing_goal10$code)
# Update List of countries deleted
list_country_deleted <- c(list_country_deleted, "GUY","TTO")
We found the last 2 countries containing missing values. Our dataset is now completely clean and ready to be used for question 1.
2.3.10.2 Data for question 2 and 4
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10, which we already discussed. Since there are no other missing values, we stop here.
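The per-country missing-value count can be sketched compactly. This toy example (illustrative columns; the report uses R's grouped `summarise(across(...))`) counts NAs per variable within each country and keeps only countries with at least one NA:

```python
import pandas as pd

# Toy panel with a few deliberate gaps.
df = pd.DataFrame({
    "code": ["ALB", "ALB", "BEL", "BEL"],
    "goal2": [55.0, None, 60.0, 61.0],
    "goal3": [70.0, 71.0, None, None],
})

# Count NAs per variable within each country, then total them per row.
missing = (df.drop(columns="code").isna()
             .groupby(df["code"]).sum())
missing["num_missing"] = missing.sum(axis=1)
print(missing[missing["num_missing"] > 0])
```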
Code
see_missing24 <- data_question24 %>%
group_by(code) %>%
summarise(across(everything(), ~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
2.3.10.3 Data for question 3
We again look at the number of missing values by country over all the variables, except goal 1 and goal 10, which we already discussed, for each of the three datasets.
Disasters
We begin by visualizing the missing values.
Code
variable_names <- names(data_question3_1)
missing_percentages <- sapply(data_question3_1, function(col) mean(is.na(col)) * 100)
missing_data_summary <- data.frame(
Variable = variable_names,
Missing_Percentage = missing_percentages
)
missing_data_summary <- missing_data_summary %>%
mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))
ggplot(data = missing_data_summary, aes(x = reorder(VariableGroup, Missing_Percentage), y = Missing_Percentage, fill = Missing_Percentage)) +
geom_bar(stat = "identity") +
geom_text(aes(label = ifelse(Missing_Percentage > 1, sprintf("%.1f%%", Missing_Percentage), ""),
y = Missing_Percentage),
position = position_stack(vjust = 1), # Adjust vertical position
color = "white", # Text color
size = 3, # Text size
hjust = 1.05) +
labs(title = "Percentage of Missing Values by Variable",
x = "Variable",
y = "Missing Percentage") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1)) +
coord_flip()
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10, which we already discussed. We find many missing values; here are the first few rows identifying them by country.
Code
see_missing3_1 <- data_question3_1 %>%
group_by(code) %>%
summarise(across(-c(goal1, goal10), # Exclude columns "goal1" and "goal10"
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
for_kable <- head(see_missing3_1, 10)
kable(for_kable)
| code | year | country | continent | region | overallscore | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal11 | goal12 | goal13 | goal15 | goal16 | total_deaths | no_injured | no_affected | no_homeless | total_affected | total_damages | num_missing |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AGO | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 6 |
| ALB | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 9 | 9 | 9 | 9 | 9 | 54 |
| ARE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 21 | 21 | 21 | 21 | 21 | 21 | 126 |
| ARM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 | 15 | 15 | 15 | 15 | 15 | 90 |
| AUT | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | 8 | 8 | 8 | 8 | 8 | 48 |
| AZE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17 | 17 | 17 | 17 | 17 | 17 | 102 |
| BDI | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 18 |
| BEL | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 5 | 5 | 5 | 5 | 5 | 30 |
| BEN | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | 7 | 7 | 7 | 7 | 7 | 42 |
| BFA | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 5 | 5 | 5 | 5 | 5 | 30 |
In this particular case, even though there are many missing values in our disaster dataset, we make the hypothesis that disaster events do not happen every year in every country, given that these are uncontrollable and non-recurring events. Therefore, the NAs we encounter become zeroes, implying that no climatic disaster occurred.
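Under that hypothesis, the fix is a single fill operation: absent disaster records are treated as "no disaster", so NAs in the impact columns become zeros. A sketch with illustrative column names:

```python
import pandas as pd

# Toy disaster data: NA means no recorded event for that country-year.
disasters = pd.DataFrame({
    "code": ["AGO", "ALB"],
    "total_deaths": [120.0, None],
    "total_damages": [None, 3.5],
})

# Replace NAs with 0 in the impact columns only.
impact_cols = ["total_deaths", "total_damages"]
disasters[impact_cols] = disasters[impact_cols].fillna(0)
print(disasters)
```

Restricting the fill to the impact columns is slightly safer than a blanket replacement over the whole table, since it cannot silently overwrite NAs in unrelated variables.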
Code
data_question3_1[is.na(data_question3_1)] <- 0
COVID19
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. Since there are no other missing values, we stop here.
Code
see_missing3_2 <- data_question3_2 %>%
group_by(code) %>%
summarise(across(-c(goal1, goal10), # Exclude columns "goal1" and "goal10"
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
Conflicts
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10, which we already discussed. Two countries (MNE and SRB) have missing values, so we remove them.
Code
see_missing3_3 <- data_question3_3 %>%
group_by(code) %>%
summarise(across(-c(goal1, goal10), # Exclude columns "goal1" and "goal10"
~ sum(is.na(.)))) %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
data_question3_3 <- data_question3_3 %>% filter(!code %in% c("MNE","SRB"))
##### EXPORT as CSV #####
write.csv(data_question1, file = here("scripts","data","data_question1.csv"))
write.csv(data_question24, file = here("scripts","data","data_question24.csv"))
write.csv(data_question3_1, file = here("scripts","data","data_question3_1.csv"))
write.csv(data_question3_2, file = here("scripts","data","data_question3_2.csv"))
write.csv(data_question3_3, file = here("scripts","data","data_question3_3.csv"))
3 Exploratory data analysis
3.1 General exploration
We display the distribution of the different SDG achievement scores, using boxplots to have an overview of the median, the range with most of the observations and the outliers.
Code
data_question1 <- read.csv(here("scripts","data","data_question1.csv"))
data_question24 <- read.csv(here("scripts", "data", "data_question24.csv"))
data_question2 <- read.csv(here("scripts", "data", "data_question24.csv"))
data_question3_1 <- read.csv(here("scripts", "data", "data_question3_1.csv"))
data_question3_2 <- read.csv(here("scripts", "data", "data_question3_2.csv"))
data_question3_3 <- read.csv(here("scripts", "data", "data_question3_3.csv"))
Q3.1 <- read.csv(here("scripts", "data", "data_question3_1.csv"))
Q3.2 <- read.csv(here("scripts", "data", "data_question3_2.csv"))
Q3.3 <- read.csv(here("scripts", "data", "data_question3_3.csv"))
data <- read.csv(here("scripts", "data", "all_Merge.csv"))
Correlation_overall <- data_question1 %>%
select(population:ef_regulation)
#### boxplots ####
#for goals
#dev.off()
boxplot(Correlation_overall[2:18],
las = 2, # Makes the axis labels perpendicular to the axis
par(mar = c(5, 4, 4, 2) + 0.1), # Adjusts the margins to fit all labels
cex.axis = 0.7, # Reduces the size of the axis labels
cex.lab = 1, # Reduces the size of the x and y labels
notch = TRUE, # Specifies whether to add notches or not
main = "Merged goals boxplot", # Title of the boxplot
xlab = "Goals", # X-axis label
ylab = "Score") # Y-axis label
We see different patterns among the goals. Some are quite homogeneous, with a small spread of values (e.g. the overall score, goals 2 and 8), while others have a large spread (e.g. goals 1 and 10). Goals 1, 3, 4, 7, 9, 10 and 13 take values across the whole possible range. Goals 2, 5, 8, 13 and 17 have extreme values lying outside the whiskers. It is interesting that goal 8 (decent work and economic growth) has the smallest spread of values, whereas goal 1 (no poverty) has the largest distance between the first and third quartiles. Goal 2 (zero hunger) has a tight spread of values but the greatest number of outliers among the small values, meaning hunger is similar across most countries, but where it differs, it differs for the worse.
We now display boxplots for the different variables of the human freedom index, and then for our other independent variables.
Code
#for Human Freedom Index scores
boxplot(Correlation_overall[23:34],
las = 2, # Makes the axis labels perpendicular to the axis
par(mar = c(7, 5, 2, 1)), # Adjusts the margins to fit all labels
cex.axis = 0.7, # Reduces the size of the axis labels
cex.lab = 1, # Reduces the size of the x and y labels
notch = TRUE, # Specifies whether to add notches or not
main = "Merged Human Freedom Index scores boxplot", # Title of the boxplot
ylab = "Score") # Y-axis label
ggplot(Correlation_overall, aes(x= factor(1), y= GDPpercapita)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of GDP per capita",x="GDP per capita", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
ggplot(Correlation_overall, aes(x= factor(1), y= unemployment.rate)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of unemployment rate",x="Unemployment rate", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
ggplot(Correlation_overall, aes(x= factor(1), y= MilitaryExpenditurePercentGDP)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of military expenditure by percentage of GDP",x="Military Expenditure", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
ggplot(Correlation_overall, aes(x= factor(1), y= internet_usage)) +
geom_violin(trim=FALSE, fill="orange")+
labs(title="Violin plot of internet_usage",x="internet_usage", y = "Distribution")+
geom_boxplot(width=0.1, outlier.size = 1)+
scale_y_continuous(labels = scales::label_number()) + # Format y-axis labels
theme_classic()
We now look at the variables in a summary table to have a more precise view of the numbers.
| X | code | year | country | continent | region | overallscore | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | goal17 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Min. : 1 | Length:3565 | Min. :2000 | Length:3565 | Length:3565 | Length:3565 | Min. :37.4 | Min. : 0.0 | Min. :16.5 | Min. : 5.9 | Min. : 0.0 | Min. : 3.5 | Min. :23.3 | Min. : 0.1 | Min. :40.0 | Min. : 0.3 | Min. : 0.0 | Min. :20.3 | Min. :32.9 | Min. : 0.0 | Min. :26.0 | Min. :27.9 | Min. :15.1 | |
| 1st Qu.: 892 | Class :character | 1st Qu.:2005 | Class :character | Class :character | Class :character | 1st Qu.:55.0 | 1st Qu.: 44.5 | 1st Qu.:52.6 | 1st Qu.:44.3 | 1st Qu.: 55.6 | 1st Qu.:43.2 | 1st Qu.:53.0 | 1st Qu.:41.5 | 1st Qu.:64.0 | 1st Qu.:15.5 | 1st Qu.: 35.2 | 1st Qu.:55.8 | 1st Qu.:67.9 | 1st Qu.:72.9 | 1st Qu.:55.0 | 1st Qu.:51.5 | 1st Qu.:46.1 | |
| Median :1783 | Mode :character | Median :2011 | Mode :character | Mode :character | Mode :character | Median :65.5 | Median : 87.4 | Median :58.9 | Median :70.9 | Median : 80.6 | Median :58.0 | Median :65.3 | Median :65.5 | Median :70.2 | Median :29.4 | Median : 62.2 | Median :75.3 | Median :84.6 | Median :90.8 | Median :65.1 | Median :61.4 | Median :55.4 | |
| Mean :1783 | NA | Mean :2011 | NA | NA | NA | Mean :64.0 | Mean : 71.7 | Mean :58.0 | Mean :64.1 | Mean : 72.0 | Mean :56.0 | Mean :65.0 | Mean :57.9 | Mean :70.0 | Mean :37.5 | Mean : 58.3 | Mean :70.3 | Mean :79.3 | Mean :82.1 | Mean :65.0 | Mean :62.6 | Mean :55.7 | |
| 3rd Qu.:2674 | NA | 3rd Qu.:2017 | NA | NA | NA | 3rd Qu.:72.4 | 3rd Qu.: 98.8 | 3rd Qu.:65.3 | 3rd Qu.:81.4 | 3rd Qu.: 94.5 | 3rd Qu.:68.9 | 3rd Qu.:75.2 | 3rd Qu.:72.6 | 3rd Qu.:76.6 | 3rd Qu.:53.9 | 3rd Qu.: 81.6 | 3rd Qu.:85.1 | 3rd Qu.:94.1 | 3rd Qu.:97.2 | 3rd Qu.:74.3 | 3rd Qu.:74.6 | 3rd Qu.:65.1 | |
| Max. :3565 | NA | Max. :2022 | NA | NA | NA | Max. :86.8 | Max. :100.0 | Max. :83.4 | Max. :97.3 | Max. :100.0 | Max. :94.0 | Max. :95.1 | Max. :99.6 | Max. :88.7 | Max. :99.2 | Max. :100.0 | Max. :99.1 | Max. :99.0 | Max. :99.9 | Max. :97.9 | Max. :96.0 | Max. :96.8 | |
| NA | NA | NA | NA | NA | NA | NA | NA's :276 | NA | NA | NA | NA | NA | NA | NA | NA | NA's :276 | NA | NA | NA | NA | NA | NA |
3.2 Focus on the influence of the factors over the SDG scores
After importing our cleaned data, we first looked at the correlations between our numerical variables.
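The computation is a pairwise Pearson correlation over the numeric columns. A minimal sketch with toy numbers (the report uses R's `cor()`): a perfectly linear pair gives a coefficient of 1, and a mostly decreasing pair gives a negative value.

```python
import pandas as pd

# Toy data: GDP is perfectly linear in the score; unemployment mostly falls.
df = pd.DataFrame({
    "overallscore": [50.0, 60.0, 70.0, 80.0],
    "GDPpercapita": [1000.0, 2000.0, 3000.0, 4000.0],
    "unemployment": [9.0, 7.0, 8.0, 6.0],
})

cor_matrix = df.corr()  # pairwise Pearson correlations
print(round(cor_matrix.loc["overallscore", "GDPpercapita"], 3))  # 1.0
print(round(cor_matrix.loc["overallscore", "unemployment"], 3))  # -0.8
```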
Code
#### Correlations between variables ####
Correlation_overall <- data_question1 %>%
select(population:ef_regulation)
cor_matrix <- cor(Correlation_overall, use = "everything")
kable(cor_matrix)
| population | overallscore | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | goal17 | unemployment.rate | GDPpercapita | MilitaryExpenditurePercentGDP | internet_usage | pf_law | pf_security | pf_movement | pf_religion | pf_assembly | pf_expression | pf_identity | ef_government | ef_legal | ef_money | ef_trade | ef_regulation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| population | 1.000 | -0.042 | -0.020 | 0.123 | -0.005 | 0.073 | -0.023 | -0.066 | -0.019 | 0.032 | 0.056 | -0.102 | -0.111 | 0.109 | 0.046 | -0.237 | -0.128 | -0.150 | -0.088 | -0.054 | 0.067 | -0.045 | -0.127 | -0.046 | -0.228 | -0.321 | -0.205 | -0.115 | -0.001 | -0.032 | -0.026 | -0.046 | -0.128 | -0.119 |
| overallscore | -0.042 | 1.000 | 0.890 | 0.676 | 0.943 | 0.873 | 0.685 | 0.905 | 0.901 | 0.728 | 0.835 | 0.562 | 0.886 | -0.767 | -0.610 | 0.186 | 0.814 | 0.572 | 0.160 | 0.617 | -0.013 | 0.805 | 0.699 | 0.514 | 0.456 | 0.218 | 0.414 | 0.510 | 0.626 | -0.213 | 0.785 | 0.624 | 0.686 | 0.548 |
| goal1 | -0.020 | 0.890 | 1.000 | 0.518 | 0.894 | 0.822 | 0.448 | 0.796 | 0.867 | 0.562 | 0.672 | 0.480 | 0.799 | -0.685 | -0.566 | 0.014 | 0.672 | 0.503 | 0.225 | 0.468 | 0.113 | 0.656 | 0.564 | 0.418 | 0.341 | 0.077 | 0.286 | 0.366 | 0.504 | -0.085 | 0.600 | 0.521 | 0.612 | 0.458 |
| goal2 | 0.123 | 0.676 | 0.518 | 1.000 | 0.640 | 0.580 | 0.487 | 0.634 | 0.575 | 0.575 | 0.617 | 0.290 | 0.548 | -0.459 | -0.360 | 0.046 | 0.521 | 0.258 | 0.005 | 0.411 | -0.103 | 0.539 | 0.427 | 0.418 | 0.262 | 0.140 | 0.242 | 0.347 | 0.392 | -0.156 | 0.505 | 0.454 | 0.455 | 0.292 |
| goal3 | -0.005 | 0.943 | 0.894 | 0.640 | 1.000 | 0.851 | 0.606 | 0.875 | 0.879 | 0.702 | 0.812 | 0.487 | 0.873 | -0.778 | -0.641 | 0.026 | 0.774 | 0.497 | 0.115 | 0.627 | 0.016 | 0.769 | 0.663 | 0.473 | 0.424 | 0.176 | 0.365 | 0.467 | 0.603 | -0.151 | 0.737 | 0.649 | 0.701 | 0.520 |
| goal4 | 0.073 | 0.873 | 0.822 | 0.580 | 0.851 | 1.000 | 0.674 | 0.781 | 0.822 | 0.620 | 0.678 | 0.315 | 0.832 | -0.659 | -0.544 | -0.007 | 0.626 | 0.451 | 0.109 | 0.482 | -0.026 | 0.628 | 0.563 | 0.384 | 0.401 | 0.194 | 0.348 | 0.417 | 0.597 | -0.054 | 0.687 | 0.544 | 0.635 | 0.481 |
| goal5 | -0.023 | 0.685 | 0.448 | 0.487 | 0.606 | 0.674 | 1.000 | 0.653 | 0.568 | 0.561 | 0.656 | 0.187 | 0.674 | -0.613 | -0.528 | 0.187 | 0.538 | 0.464 | 0.015 | 0.539 | -0.171 | 0.634 | 0.555 | 0.241 | 0.432 | 0.304 | 0.351 | 0.441 | 0.595 | -0.235 | 0.692 | 0.493 | 0.498 | 0.491 |
| goal6 | -0.066 | 0.905 | 0.796 | 0.634 | 0.875 | 0.781 | 0.653 | 1.000 | 0.830 | 0.706 | 0.793 | 0.426 | 0.823 | -0.774 | -0.620 | 0.163 | 0.755 | 0.490 | 0.133 | 0.624 | -0.089 | 0.735 | 0.686 | 0.431 | 0.501 | 0.337 | 0.485 | 0.568 | 0.686 | -0.173 | 0.775 | 0.607 | 0.679 | 0.511 |
| goal7 | -0.019 | 0.901 | 0.867 | 0.575 | 0.879 | 0.822 | 0.568 | 0.830 | 1.000 | 0.563 | 0.687 | 0.382 | 0.841 | -0.653 | -0.504 | 0.032 | 0.666 | 0.514 | 0.228 | 0.474 | 0.024 | 0.681 | 0.582 | 0.393 | 0.398 | 0.169 | 0.353 | 0.406 | 0.586 | -0.080 | 0.634 | 0.545 | 0.639 | 0.429 |
| goal8 | 0.032 | 0.728 | 0.562 | 0.575 | 0.702 | 0.620 | 0.561 | 0.706 | 0.563 | 1.000 | 0.757 | 0.446 | 0.637 | -0.709 | -0.612 | 0.158 | 0.656 | 0.272 | -0.243 | 0.642 | -0.152 | 0.646 | 0.648 | 0.496 | 0.475 | 0.371 | 0.493 | 0.617 | 0.547 | -0.177 | 0.729 | 0.526 | 0.599 | 0.443 |
| goal9 | 0.056 | 0.835 | 0.672 | 0.617 | 0.812 | 0.678 | 0.656 | 0.793 | 0.687 | 0.757 | 1.000 | 0.517 | 0.723 | -0.837 | -0.744 | 0.159 | 0.772 | 0.444 | 0.025 | 0.812 | -0.002 | 0.891 | 0.766 | 0.486 | 0.397 | 0.253 | 0.398 | 0.542 | 0.506 | -0.321 | 0.829 | 0.604 | 0.611 | 0.549 |
| goal10 | -0.102 | 0.562 | 0.480 | 0.290 | 0.487 | 0.315 | 0.187 | 0.426 | 0.382 | 0.446 | 0.517 | 1.000 | 0.336 | -0.542 | -0.502 | 0.253 | 0.584 | 0.152 | -0.076 | 0.473 | 0.039 | 0.489 | 0.468 | 0.539 | 0.128 | -0.053 | 0.112 | 0.241 | 0.181 | -0.286 | 0.436 | 0.291 | 0.293 | 0.310 |
| goal11 | -0.111 | 0.886 | 0.799 | 0.548 | 0.873 | 0.832 | 0.674 | 0.823 | 0.841 | 0.637 | 0.723 | 0.336 | 1.000 | -0.720 | -0.590 | 0.040 | 0.741 | 0.514 | 0.175 | 0.564 | -0.004 | 0.693 | 0.668 | 0.388 | 0.487 | 0.269 | 0.405 | 0.485 | 0.627 | -0.177 | 0.751 | 0.590 | 0.678 | 0.552 |
| goal12 | 0.109 | -0.767 | -0.685 | -0.459 | -0.778 | -0.659 | -0.613 | -0.774 | -0.653 | -0.709 | -0.837 | -0.542 | -0.720 | 1.000 | 0.887 | -0.236 | -0.818 | -0.375 | -0.038 | -0.849 | 0.049 | -0.736 | -0.851 | -0.522 | -0.505 | -0.361 | -0.478 | -0.648 | -0.544 | 0.301 | -0.838 | -0.583 | -0.683 | -0.572 |
| goal13 | 0.046 | -0.610 | -0.566 | -0.360 | -0.641 | -0.544 | -0.528 | -0.620 | -0.504 | -0.612 | -0.744 | -0.502 | -0.590 | 0.887 | 1.000 | -0.151 | -0.682 | -0.311 | 0.027 | -0.796 | -0.067 | -0.647 | -0.726 | -0.432 | -0.352 | -0.213 | -0.283 | -0.480 | -0.406 | 0.301 | -0.723 | -0.465 | -0.530 | -0.514 |
| goal15 | -0.237 | 0.186 | 0.014 | 0.046 | 0.026 | -0.007 | 0.187 | 0.163 | 0.032 | 0.158 | 0.159 | 0.253 | 0.040 | -0.236 | -0.151 | 1.000 | 0.236 | 0.111 | 0.160 | 0.182 | -0.095 | 0.235 | 0.244 | 0.123 | 0.217 | 0.252 | 0.236 | 0.260 | 0.188 | -0.240 | 0.220 | 0.112 | 0.136 | 0.144 |
| goal16 | -0.128 | 0.814 | 0.672 | 0.521 | 0.774 | 0.626 | 0.538 | 0.755 | 0.666 | 0.656 | 0.772 | 0.584 | 0.741 | -0.818 | -0.682 | 0.236 | 1.000 | 0.484 | 0.195 | 0.696 | 0.002 | 0.689 | 0.842 | 0.638 | 0.478 | 0.300 | 0.471 | 0.618 | 0.502 | -0.347 | 0.840 | 0.568 | 0.645 | 0.570 |
| goal17 | -0.150 | 0.572 | 0.503 | 0.258 | 0.497 | 0.451 | 0.464 | 0.490 | 0.514 | 0.272 | 0.444 | 0.152 | 0.514 | -0.375 | -0.311 | 0.111 | 0.484 | 1.000 | 0.358 | 0.334 | 0.134 | 0.442 | 0.383 | 0.167 | 0.343 | 0.185 | 0.307 | 0.320 | 0.386 | -0.257 | 0.490 | 0.342 | 0.346 | 0.319 |
| unemployment.rate | -0.088 | 0.160 | 0.225 | 0.005 | 0.115 | 0.109 | 0.015 | 0.133 | 0.228 | -0.243 | 0.025 | -0.076 | 0.175 | -0.038 | 0.027 | 0.160 | 0.195 | 0.358 | 1.000 | -0.086 | 0.177 | 0.050 | 0.176 | 0.037 | 0.127 | 0.056 | 0.158 | 0.072 | 0.095 | -0.183 | 0.098 | 0.046 | 0.121 | 0.123 |
| GDPpercapita | -0.054 | 0.617 | 0.468 | 0.411 | 0.627 | 0.482 | 0.539 | 0.624 | 0.474 | 0.642 | 0.812 | 0.473 | 0.564 | -0.849 | -0.796 | 0.182 | 0.696 | 0.334 | -0.086 | 1.000 | -0.068 | 0.719 | 0.745 | 0.445 | 0.397 | 0.317 | 0.396 | 0.557 | 0.406 | -0.311 | 0.756 | 0.492 | 0.507 | 0.493 |
| MilitaryExpenditurePercentGDP | 0.067 | -0.013 | 0.113 | -0.103 | 0.016 | -0.026 | -0.171 | -0.089 | 0.024 | -0.152 | -0.002 | 0.039 | -0.004 | 0.049 | -0.067 | -0.095 | 0.002 | 0.134 | 0.177 | -0.068 | 1.000 | -0.026 | -0.029 | -0.103 | -0.306 | -0.302 | -0.316 | -0.262 | -0.191 | -0.163 | -0.096 | -0.079 | -0.087 | -0.078 |
| internet_usage | -0.045 | 0.805 | 0.656 | 0.539 | 0.769 | 0.628 | 0.634 | 0.735 | 0.681 | 0.646 | 0.891 | 0.489 | 0.693 | -0.736 | -0.647 | 0.235 | 0.689 | 0.442 | 0.050 | 0.719 | -0.026 | 1.000 | 0.654 | 0.455 | 0.355 | 0.204 | 0.332 | 0.429 | 0.469 | -0.255 | 0.712 | 0.581 | 0.553 | 0.566 |
| pf_law | -0.127 | 0.699 | 0.564 | 0.427 | 0.663 | 0.563 | 0.555 | 0.686 | 0.582 | 0.648 | 0.766 | 0.468 | 0.668 | -0.851 | -0.726 | 0.244 | 0.842 | 0.383 | 0.176 | 0.745 | -0.029 | 0.654 | 1.000 | 0.575 | 0.594 | 0.476 | 0.567 | 0.710 | 0.498 | -0.334 | 0.852 | 0.538 | 0.668 | 0.625 |
| pf_security | -0.046 | 0.514 | 0.418 | 0.418 | 0.473 | 0.384 | 0.241 | 0.431 | 0.393 | 0.496 | 0.486 | 0.539 | 0.388 | -0.522 | -0.432 | 0.123 | 0.638 | 0.167 | 0.037 | 0.445 | -0.103 | 0.455 | 0.575 | 1.000 | 0.377 | 0.162 | 0.301 | 0.422 | 0.238 | -0.285 | 0.530 | 0.326 | 0.420 | 0.352 |
| pf_movement | -0.228 | 0.456 | 0.341 | 0.262 | 0.424 | 0.401 | 0.432 | 0.501 | 0.398 | 0.475 | 0.397 | 0.128 | 0.487 | -0.505 | -0.352 | 0.217 | 0.478 | 0.343 | 0.127 | 0.397 | -0.306 | 0.355 | 0.594 | 0.377 | 1.000 | 0.737 | 0.781 | 0.768 | 0.508 | 0.010 | 0.586 | 0.449 | 0.607 | 0.488 |
| pf_religion | -0.321 | 0.218 | 0.077 | 0.140 | 0.176 | 0.194 | 0.304 | 0.337 | 0.169 | 0.371 | 0.253 | -0.053 | 0.269 | -0.361 | -0.213 | 0.252 | 0.300 | 0.185 | 0.056 | 0.317 | -0.302 | 0.204 | 0.476 | 0.162 | 0.737 | 1.000 | 0.846 | 0.754 | 0.411 | 0.098 | 0.412 | 0.293 | 0.430 | 0.333 |
| pf_assembly | -0.205 | 0.414 | 0.286 | 0.242 | 0.365 | 0.348 | 0.351 | 0.485 | 0.353 | 0.493 | 0.398 | 0.112 | 0.405 | -0.478 | -0.283 | 0.236 | 0.471 | 0.307 | 0.158 | 0.396 | -0.316 | 0.332 | 0.567 | 0.301 | 0.781 | 0.846 | 1.000 | 0.888 | 0.452 | 0.085 | 0.554 | 0.437 | 0.560 | 0.428 |
| pf_expression | -0.115 | 0.510 | 0.366 | 0.347 | 0.467 | 0.417 | 0.441 | 0.568 | 0.406 | 0.617 | 0.542 | 0.241 | 0.485 | -0.648 | -0.480 | 0.260 | 0.618 | 0.320 | 0.072 | 0.557 | -0.262 | 0.429 | 0.710 | 0.422 | 0.768 | 0.754 | 0.888 | 1.000 | 0.472 | -0.090 | 0.690 | 0.484 | 0.612 | 0.471 |
| pf_identity | -0.001 | 0.626 | 0.504 | 0.392 | 0.603 | 0.597 | 0.595 | 0.686 | 0.586 | 0.547 | 0.506 | 0.181 | 0.627 | -0.544 | -0.406 | 0.188 | 0.502 | 0.386 | 0.095 | 0.406 | -0.191 | 0.469 | 0.498 | 0.238 | 0.508 | 0.411 | 0.452 | 0.472 | 1.000 | -0.070 | 0.574 | 0.425 | 0.540 | 0.342 |
| ef_government | -0.032 | -0.213 | -0.085 | -0.156 | -0.151 | -0.054 | -0.235 | -0.173 | -0.080 | -0.177 | -0.321 | -0.286 | -0.177 | 0.301 | 0.301 | -0.240 | -0.347 | -0.257 | -0.183 | -0.311 | -0.163 | -0.255 | -0.334 | -0.285 | 0.010 | 0.098 | 0.085 | -0.090 | -0.070 | 1.000 | -0.259 | -0.007 | -0.006 | -0.019 |
| ef_legal | -0.026 | 0.785 | 0.600 | 0.505 | 0.737 | 0.687 | 0.692 | 0.775 | 0.634 | 0.729 | 0.829 | 0.436 | 0.751 | -0.838 | -0.723 | 0.220 | 0.840 | 0.490 | 0.098 | 0.756 | -0.096 | 0.712 | 0.852 | 0.530 | 0.586 | 0.412 | 0.554 | 0.690 | 0.574 | -0.259 | 1.000 | 0.604 | 0.694 | 0.676 |
| ef_money | -0.046 | 0.624 | 0.521 | 0.454 | 0.649 | 0.544 | 0.493 | 0.607 | 0.545 | 0.526 | 0.604 | 0.291 | 0.590 | -0.583 | -0.465 | 0.112 | 0.568 | 0.342 | 0.046 | 0.492 | -0.079 | 0.581 | 0.538 | 0.326 | 0.449 | 0.293 | 0.437 | 0.484 | 0.425 | -0.007 | 0.604 | 1.000 | 0.742 | 0.553 |
| ef_trade | -0.128 | 0.686 | 0.612 | 0.455 | 0.701 | 0.635 | 0.498 | 0.679 | 0.639 | 0.599 | 0.611 | 0.293 | 0.678 | -0.683 | -0.530 | 0.136 | 0.645 | 0.346 | 0.121 | 0.507 | -0.087 | 0.553 | 0.668 | 0.420 | 0.607 | 0.430 | 0.560 | 0.612 | 0.540 | -0.006 | 0.694 | 0.742 | 1.000 | 0.628 |
| ef_regulation | -0.119 | 0.548 | 0.458 | 0.292 | 0.520 | 0.481 | 0.491 | 0.511 | 0.429 | 0.443 | 0.549 | 0.310 | 0.552 | -0.572 | -0.514 | 0.144 | 0.570 | 0.319 | 0.123 | 0.493 | -0.078 | 0.566 | 0.625 | 0.352 | 0.488 | 0.333 | 0.428 | 0.471 | 0.342 | -0.019 | 0.676 | 0.553 | 0.628 | 1.000 |
By doing so, we obtain many positive and negative correlations. To better understand them and get an overall picture of the situation, we use the following heatmap.
Code
#### Heatmap ####
cor_melted <- melt(cor_matrix)
ggplot(data = cor_melted, aes(Var1, Var2, fill = value)) +
geom_tile() +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1, 1), space = "Lab",
name="Pearson\nCorrelation") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
axis.text.y = element_text(size = 8)) +
coord_fixed() +
labs(x = '', y = '', title = 'Correlation Matrix Heatmap')
In the correlation matrix heatmap, we notice that most goals from 1 to 11 are positively correlated with one another. On the other hand, goals 12 and 13 have negative relationships with the majority of our variables, except with each other: between themselves they are strongly correlated. In addition, we notice another strong correlation among the personal freedom (pf) variables, i.e. the Human Freedom Index scores on movement, religion, assembly and expression.
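The strongest pairs visible in the heatmap can also be pulled out numerically. A minimal base-R sketch, illustrated on a built-in dataset for self-containment (in the analysis above, the `cor_matrix` object would be used instead):

```r
# List variable pairs whose absolute correlation exceeds a threshold.
# Illustrated on mtcars; cor_matrix from the analysis would take cm's place.
cm <- cor(mtcars[, c("mpg", "disp", "hp", "wt", "qsec")])

# upper.tri() keeps each pair once and drops the diagonal of 1s
idx <- which(abs(cm) > 0.7 & upper.tri(cm), arr.ind = TRUE)
strong <- data.frame(var1 = rownames(cm)[idx[, 1]],
                     var2 = colnames(cm)[idx[, 2]],
                     r    = round(cm[idx], 3))
strong[order(-abs(strong$r)), ]
```

Sorting by absolute value ranks strong negative relationships (such as those of goals 12 and 13) alongside the strong positive ones.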
To get an overview of the relationship between our independent variables and the overall SDG score, we produce several pairs plots showing the Pearson correlation coefficient between each pair of variables, scatter plots of their relationships, and the distribution of each variable.
Code
#### Pearson's correlation coeff ####
panel.hist <- function(x, ...){
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col = "lightgreen", ...)
}
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...){
usr <- par("usr"); on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
r <- (cor(x, y))
txt <- format(c(r, 0.123456789), digits = digits)[1]
txt <- paste0(prefix, txt)
if(missing(cex.cor)) cex.cor <- 0.8/strwidth(txt)
text(0.5, 0.5, txt, cex = cex.cor * abs(r))  # abs() keeps negative correlations legible
}
# Independent variables
pairs(data_question1[,c("overallscore", "unemployment.rate", "GDPpercapita", "MilitaryExpenditurePercentGDP", "internet_usage")], upper.panel=panel.cor, diag.panel=panel.hist, main="Correlation table and distribution of various variables")
The overall SDG achievement score is highly correlated with the percentage of people using the internet (r=.79) and with GDP per capita (r=.60). The unemployment rate and military expenditure as a percentage of GDP do not seem to play a role. However, this holds only for the overall score.
Code
pairs(data_question1[,c("overallscore", "pf_law", "pf_security", "pf_movement", "pf_religion", "pf_assembly", "pf_expression", "pf_identity")], upper.panel=panel.cor, diag.panel=panel.hist, main="Correlation table and distribution of personal freedom variables")
The overall SDG achievement score is highly correlated with "personal freedom: law" (r=.69) and "personal freedom: identity" (r=.62). The other dimensions of personal freedom do not seem to have an important influence. Regarding the distributions of the personal freedom variables, we notice that, except for law, all are left-skewed, meaning that most countries have high scores.
Code
pairs(data_question1[,c("overallscore", "ef_government", "ef_legal", "ef_money", "ef_trade", "ef_regulation")], upper.panel=panel.cor, diag.panel=panel.hist, main="Correlation table and distribution of economic freedom variables")
The overall SDG achievement score is highly correlated with "economic freedom: legal" (r=.77), "economic freedom: trade" (r=.67) and "economic freedom: money" (r=.60), while the other dimensions of economic freedom do not seem to have an important influence. Regarding the distributions of the economic freedom variables, we notice more heterogeneous distributions and scores across countries than for personal freedom.
Code
#### PCA ####
# for goals
myPCA_g <- PCA(data_question1[,9:20])
summary(myPCA_g)
#>
#> Call:
#> PCA(X = data_question1[, 9:20])
#>
#>
#> Eigenvalues
#> Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
#> Variance 8.290 0.962 0.774 0.585 0.361 0.272
#> % of var. 69.087 8.018 6.453 4.878 3.010 2.270
#> Cumulative % of var. 69.087 77.105 83.558 88.436 91.446 93.715
#> Dim.7 Dim.8 Dim.9 Dim.10 Dim.11 Dim.12
#> Variance 0.186 0.153 0.147 0.111 0.094 0.063
#> % of var. 1.552 1.275 1.227 0.925 0.782 0.524
#> Cumulative % of var. 95.267 96.542 97.769 98.694 99.476 100.000
#>
#> Individuals (the 10 first)
#> Dist Dim.1 ctr cos2 Dim.2 ctr cos2
#> 1 | 4.712 | -4.466 0.096 0.899 | 0.204 0.002 0.002 |
#> 2 | 4.680 | -4.435 0.095 0.898 | 0.214 0.002 0.002 |
#> 3 | 4.591 | -4.361 0.092 0.902 | 0.246 0.003 0.003 |
#> 4 | 4.526 | -4.301 0.089 0.903 | 0.273 0.003 0.004 |
#> 5 | 4.479 | -4.255 0.087 0.903 | 0.294 0.004 0.004 |
#> 6 | 4.380 | -4.167 0.084 0.905 | 0.335 0.005 0.006 |
#> 7 | 4.339 | -4.139 0.083 0.910 | 0.344 0.005 0.006 |
#> 8 | 4.264 | -4.067 0.080 0.910 | 0.360 0.005 0.007 |
#> 9 | 3.826 | -3.662 0.065 0.916 | -0.068 0.000 0.000 |
#> 10 | 3.721 | -3.539 0.060 0.904 | -0.012 0.000 0.000 |
#> Dim.3 ctr cos2
#> 1 -0.066 0.000 0.000 |
#> 2 -0.120 0.001 0.001 |
#> 3 -0.119 0.001 0.001 |
#> 4 -0.153 0.001 0.001 |
#> 5 -0.166 0.001 0.001 |
#> 6 -0.165 0.001 0.001 |
#> 7 -0.152 0.001 0.001 |
#> 8 -0.146 0.001 0.001 |
#> 9 -0.319 0.005 0.007 |
#> 10 -0.473 0.012 0.016 |
#>
#> Variables (the 10 first)
#> Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3 ctr
#> goal1 | 0.871 9.156 0.759 | 0.002 0.000 0.000 | 0.416 22.389
#> goal2 | 0.693 5.794 0.480 | 0.104 1.122 0.011 | -0.240 7.465
#> goal3 | 0.956 11.018 0.913 | 0.016 0.026 0.000 | 0.155 3.121
#> goal4 | 0.882 9.373 0.777 | 0.252 6.601 0.064 | 0.138 2.470
#> goal5 | 0.721 6.266 0.519 | 0.301 9.393 0.090 | -0.422 22.975
#> goal6 | 0.924 10.305 0.854 | 0.050 0.261 0.003 | 0.012 0.019
#> goal7 | 0.886 9.459 0.784 | 0.162 2.734 0.026 | 0.303 11.823
#> goal8 | 0.787 7.474 0.620 | -0.178 3.288 0.032 | -0.354 16.211
#> goal9 | 0.882 9.379 0.778 | -0.170 3.008 0.029 | -0.240 7.467
#> goal10 | 0.521 3.277 0.272 | -0.782 63.498 0.611 | 0.122 1.926
#> cos2
#> goal1 0.173 |
#> goal2 0.058 |
#> goal3 0.024 |
#> goal4 0.019 |
#> goal5 0.178 |
#> goal6 0.000 |
#> goal7 0.092 |
#> goal8 0.126 |
#> goal9 0.058 |
#> goal10 0.015 |
myPCA_g$eig
#> eigenvalue percentage of variance
#> comp 1 8.2905 69.087
#> comp 2 0.9621 8.018
#> comp 3 0.7743 6.453
#> comp 4 0.5854 4.878
#> comp 5 0.3612 3.010
#> comp 6 0.2724 2.270
#> comp 7 0.1862 1.552
#> comp 8 0.1530 1.275
#> comp 9 0.1472 1.227
#> comp 10 0.1110 0.925
#> comp 11 0.0938 0.782
#> comp 12 0.0629 0.524
#> cumulative percentage of variance
#> comp 1 69.1
#> comp 2 77.1
#> comp 3 83.6
#> comp 4 88.4
#> comp 5 91.4
#> comp 6 93.7
#> comp 7 95.3
#> comp 8 96.5
#> comp 9 97.8
#> comp 10 98.7
#> comp 11 99.5
#> comp 12 100.0
Concerning the SDG goals, we conclude that most of our variables load along the 1st component, except goals 10 and 15, which are rather uncorrelated with dimension 1. In addition, as seen before, goals 12 and 13 are negatively correlated with the other goals. Only the first component has an eigenvalue greater than 1 (the second, at 0.96, falls just short), so a strict application of the Kaiser-Guttman rule retains a single dimension; even the first two components together explain less than 80% of the cumulative variance.
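The Kaiser-Guttman rule can be applied mechanically to any PCA eigenvalue table. A short base-R sketch, run on a built-in dataset so it is self-contained (in the analysis above, the eigenvalues come from `myPCA_g$eig`):

```r
# Kaiser-Guttman rule: retain components whose eigenvalue exceeds 1,
# i.e. components explaining more variance than one standardised variable.
pca <- prcomp(USArrests, scale. = TRUE)   # stand-in for the PCA above
eig <- pca$sdev^2                         # eigenvalues of the correlation matrix

n_keep  <- sum(eig > 1)                   # number of components to retain
cum_var <- cumsum(eig) / sum(eig) * 100   # cumulative % of variance explained

n_keep
round(cum_var[n_keep], 1)
```

Comparing `cum_var[n_keep]` against a target such as 80% makes explicit the trade-off discussed here: the rule can retain few components even when they fall short of the desired cumulative variance.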
Code
#for HFI scores
myPCA_s <- PCA(data_question1[,30:40])
summary(myPCA_s)
#>
#> Call:
#> PCA(X = data_question1[, 30:40])
#>
#>
#> Eigenvalues
#> Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
#> Variance 5.783 1.486 1.065 0.710 0.553 0.468
#> % of var. 52.574 13.505 9.686 6.453 5.025 4.257
#> Cumulative % of var. 52.574 66.079 75.765 82.218 87.243 91.500
#> Dim.7 Dim.8 Dim.9 Dim.10 Dim.11
#> Variance 0.282 0.221 0.207 0.148 0.077
#> % of var. 2.565 2.006 1.880 1.346 0.702
#> Cumulative % of var. 94.065 96.071 97.951 99.298 100.000
#>
#> Individuals (the 10 first)
#> Dist Dim.1 ctr cos2 Dim.2 ctr cos2
#> 1 | 5.391 | -4.227 0.124 0.615 | 0.950 0.024 0.031
#> 2 | 4.570 | -3.928 0.107 0.739 | 0.541 0.008 0.014
#> 3 | 4.710 | -3.907 0.106 0.688 | 1.105 0.033 0.055
#> 4 | 3.347 | -3.202 0.071 0.915 | 0.125 0.000 0.001
#> 5 | 3.276 | -3.035 0.064 0.858 | 0.058 0.000 0.000
#> 6 | 5.772 | -4.243 0.125 0.540 | 0.843 0.019 0.021
#> 7 | 4.493 | -3.325 0.076 0.547 | -0.415 0.005 0.009
#> 8 | 4.436 | -3.362 0.078 0.574 | -0.472 0.006 0.011
#> 9 | 4.492 | -3.434 0.082 0.584 | -0.541 0.008 0.015
#> 10 | 3.855 | -3.333 0.077 0.747 | 0.805 0.017 0.044
#> Dim.3 ctr cos2
#> 1 | 1.060 0.042 0.039 |
#> 2 | 0.806 0.024 0.031 |
#> 3 | 1.234 0.057 0.069 |
#> 4 | 0.476 0.009 0.020 |
#> 5 | 0.509 0.010 0.024 |
#> 6 | -2.524 0.239 0.191 |
#> 7 | -2.603 0.254 0.336 |
#> 8 | -2.512 0.237 0.321 |
#> 9 | -2.733 0.281 0.370 |
#> 10 | -0.958 0.034 0.062 |
#>
#> Variables (the 10 first)
#> Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
#> pf_security | 0.523 4.728 0.273 | -0.488 16.042 0.238 | -0.264
#> pf_movement | 0.845 12.348 0.714 | 0.247 4.095 0.061 | -0.141
#> pf_religion | 0.728 9.160 0.530 | 0.540 19.637 0.292 | -0.241
#> pf_assembly | 0.843 12.284 0.710 | 0.410 11.314 0.168 | -0.179
#> pf_expression | 0.884 13.502 0.781 | 0.188 2.388 0.035 | -0.248
#> pf_identity | 0.651 7.332 0.424 | -0.048 0.155 0.002 | 0.061
#> ef_government | -0.076 0.100 0.006 | 0.645 28.023 0.416 | 0.635
#> ef_legal | 0.837 12.110 0.700 | -0.346 8.053 0.120 | 0.000
#> ef_money | 0.697 8.408 0.486 | -0.248 4.138 0.061 | 0.446
#> ef_trade | 0.824 11.751 0.680 | -0.177 2.117 0.031 | 0.321
#> ctr cos2
#> pf_security 6.539 0.070 |
#> pf_movement 1.858 0.020 |
#> pf_religion 5.437 0.058 |
#> pf_assembly 3.021 0.032 |
#> pf_expression 5.768 0.061 |
#> pf_identity 0.352 0.004 |
#> ef_government 37.854 0.403 |
#> ef_legal 0.000 0.000 |
#> ef_money 18.704 0.199 |
#> ef_trade 9.693 0.103 |
myPCA_s$eig
#> eigenvalue percentage of variance
#> comp 1 5.7832 52.574
#> comp 2 1.4855 13.505
#> comp 3 1.0654 9.686
#> comp 4 0.7099 6.453
#> comp 5 0.5527 5.025
#> comp 6 0.4682 4.257
#> comp 7 0.2822 2.565
#> comp 8 0.2207 2.006
#> comp 9 0.2068 1.880
#> comp 10 0.1481 1.346
#> comp 11 0.0773 0.702
#> cumulative percentage of variance
#> comp 1 52.6
#> comp 2 66.1
#> comp 3 75.8
#> comp 4 82.2
#> comp 5 87.2
#> comp 6 91.5
#> comp 7 94.1
#> comp 8 96.1
#> comp 9 98.0
#> comp 10 99.3
#> comp 11 100.0
Now concerning the Human Freedom Index scores, most of the variables are positively correlated with dimension 1, slightly less so for pf_religion and pf_security, while ef_government is uncorrelated with dimension 1. With eigenvalues greater than 1 for the first three components, we conclude that there are 3 dimensions to take into account according to the Kaiser-Guttman rule. Nevertheless, again, they explain less than 80% of the cumulative variance.
Code
#### Kmean clustering ####
data1_scaled <- scale(Correlation_overall)
row.names(data1_scaled) <- data_question1[,1]
fviz_nbclust(data1_scaled, kmeans, method="wss")
kmean <- kmeans(data1_scaled, 7, nstart = 25)
print(kmean)
#> K-means clustering with 7 clusters of sizes 649, 328, 415, 417, 362, 286, 42
#>
#> Cluster means:
#> population overallscore goal1 goal2 goal3 goal4 goal5
#> 1 -0.1175 -1.36318 -1.449 -0.81472 -1.3936 -1.385 -0.8789
#> 2 -0.0541 0.17641 0.544 0.00762 0.2521 0.149 -0.3879
#> 3 -0.2228 0.90857 0.782 0.66143 0.8405 0.795 0.4841
#> 4 -0.0441 -0.00632 0.188 0.17283 0.1959 0.371 0.1977
#> 5 -0.0600 1.23573 0.801 0.88135 1.1750 0.849 1.1963
#> 6 -0.2757 0.07437 0.277 -0.56992 -0.1056 0.133 -0.0352
#> 7 7.2721 -0.38531 -0.247 0.56278 -0.0921 0.469 -0.2078
#> goal6 goal7 goal8 goal9 goal10 goal11 goal12 goal13 goal15
#> 1 -1.2223 -1.384 -0.8171 -0.981 -0.415 -1.345 0.957 0.7707 0.0747
#> 2 -0.1950 0.295 -0.3245 -0.115 0.383 0.195 0.279 -0.0591 -0.5520
#> 3 0.8673 0.800 0.7735 0.695 0.636 0.771 -0.716 -0.4075 0.6050
#> 4 0.1533 0.275 -0.0246 -0.394 -0.955 0.237 0.380 0.5083 -0.6663
#> 5 1.3303 0.855 1.4579 1.716 1.020 1.061 -1.788 -1.6815 0.4901
#> 6 -0.0731 0.183 -0.7134 -0.268 -0.202 0.131 0.150 0.2447 0.1412
#> 7 -0.6498 -0.172 0.0534 0.128 -0.800 -0.754 0.725 0.3586 -1.3909
#> goal16 goal17 unemployment.rate GDPpercapita
#> 1 -1.013 -0.88702 -0.4506 -0.614
#> 2 -0.101 0.25031 -0.0371 -0.316
#> 3 0.795 -0.00349 0.2777 0.252
#> 4 -0.517 -0.07079 -0.3422 -0.435
#> 5 1.524 0.85550 -0.3224 2.003
#> 6 0.187 0.91913 1.6081 -0.438
#> 7 -0.684 -1.14336 -0.2663 -0.497
#> MilitaryExpenditurePercentGDP internet_usage pf_law pf_security
#> 1 -0.131 -0.9411 -0.832 -0.4806
#> 2 0.879 -0.0108 -0.490 -0.0541
#> 3 0.108 0.6929 0.802 0.7241
#> 4 -0.517 -0.2850 -0.543 -0.7438
#> 5 -0.314 1.3701 1.602 0.9262
#> 6 0.226 -0.1110 0.141 0.0120
#> 7 0.403 -0.4433 -0.626 0.0151
#> pf_movement pf_religion pf_assembly pf_expression pf_identity
#> 1 -0.605 -0.165 -0.490 -0.5657 -0.932
#> 2 -1.148 -1.736 -1.515 -1.2681 -0.653
#> 3 0.658 0.546 0.703 0.7639 0.750
#> 4 0.316 0.427 0.325 0.0394 0.304
#> 5 0.908 0.748 0.918 1.3118 0.879
#> 6 0.326 0.309 0.397 0.0165 0.197
#> 7 -1.378 -2.071 -1.379 -0.7129 0.156
#> ef_government ef_legal ef_money ef_trade ef_regulation
#> 1 0.0485 -0.9672 -0.9082 -1.0115 -0.727
#> 2 -0.1659 -0.4117 -0.2798 -0.4079 -0.274
#> 3 -0.2337 0.6501 0.7269 0.8652 0.362
#> 4 0.9587 -0.3821 0.1523 0.1605 -0.238
#> 5 -0.7281 1.7424 0.9840 1.0330 1.105
#> 6 0.0255 0.0860 -0.0965 0.0753 0.495
#> 7 -0.5607 -0.0729 -0.2994 -0.7427 -0.731
#>
#> Clustering vector:
#> 1 2 3 4 5 6 7 8 9 10 11 12 13
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 14 15 16 17 18 19 20 21 22 23 24 25 26
#> 1 1 1 1 1 1 1 1 6 6 6 6 6
#> 27 28 29 30 31 32 33 34 35 36 37 38 39
#> 6 6 6 6 6 6 6 6 6 6 6 6 6
#> 40 41 42 43 44 45 46 47 48 49 50 51 52
#> 3 3 6 2 2 2 2 2 2 2 2 2 2
#> 53 54 55 56 57 58 59 60 61 62 63 64 65
#> 2 2 2 2 2 2 2 2 2 2 2 4 4
#> 66 67 68 69 70 71 72 73 74 75 76 77 78
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 79 80 81 82 83 84 85 86 87 88 89 90 91
#> 4 4 3 3 4 4 6 6 6 6 6 6 6
#> 92 93 94 95 96 97 98 99 100 101 102 103 104
#> 6 6 2 6 6 6 6 6 6 6 6 6 3
#> 105 106 107 108 109 110 111 112 113 114 115 116 117
#> 3 5 5 5 5 5 5 5 5 5 5 5 5
#> 118 119 120 121 122 123 124 125 126 127 128 129 130
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 131 132 133 134 135 136 137 138 139 140 141 142 143
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 144 145 146 147 148 149 150 151 152 153 154 155 156
#> 5 5 5 5 2 2 2 2 2 2 2 2 2
#> 157 158 159 160 161 162 163 164 165 166 167 168 169
#> 2 2 2 2 2 2 2 2 2 2 2 2 1
#> 170 171 172 173 174 175 176 177 178 179 180 181 182
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 183 184 185 186 187 188 189 190 191 192 193 194 195
#> 1 1 1 1 1 1 1 5 3 5 5 5 5
#> 196 197 198 199 200 201 202 203 204 205 206 207 208
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 209 210 211 212 213 214 215 216 217 218 219 220 221
#> 5 5 1 1 1 1 1 1 1 1 1 1 1
#> 222 223 224 225 226 227 228 229 230 231 232 233 234
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 235 236 237 238 239 240 241 242 243 244 245 246 247
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 248 249 250 251 252 253 254 255 256 257 258 259 260
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 261 262 263 264 265 266 267 268 269 270 271 272 273
#> 1 1 1 1 1 1 1 1 1 1 2 2 2
#> 274 275 276 277 278 279 280 281 282 283 284 285 286
#> 6 6 6 6 6 6 3 3 3 3 3 3 3
#> 287 288 289 290 291 292 293 294 295 296 297 298 299
#> 3 3 3 3 3 3 3 3 6 6 6 6 6
#> 300 301 302 303 304 305 306 307 308 309 310 311 312
#> 6 6 6 6 6 6 6 6 6 6 6 6 6
#> 313 314 315 316 317 318 319 320 321 322 323 324 325
#> 6 6 6 4 4 4 4 4 4 4 4 4 4
#> 326 327 328 329 330 331 332 333 334 335 336 337 338
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 339 340 341 342 343 344 345 346 347 348 349 350 351
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 352 353 354 355 356 357 358 359 360 361 362 363 364
#> 4 4 4 4 4 4 6 6 6 6 6 6 6
#> 365 366 367 368 369 370 371 372 373 374 375 376 377
#> 6 6 6 6 6 6 6 6 6 6 6 6 6
#> 378 379 380 381 382 383 384 385 386 387 388 389 390
#> 6 1 1 1 1 1 1 1 1 1 1 1 1
#> 391 392 393 394 395 396 397 398 399 400 401 402 403
#> 1 1 1 1 1 1 1 1 1 5 5 5 5
#> 404 405 406 407 408 409 410 411 412 413 414 415 416
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 417 418 419 420 421 422 423 424 425 426 427 428 429
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 430 431 432 433 434 435 436 437 438 439 440 441 442
#> 5 5 5 5 5 5 5 5 5 5 5 5 3
#> 443 444 445 446 447 448 449 450 451 452 453 454 455
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 456 457 458 459 460 461 462 463 464 465 466 467 468
#> 3 3 3 3 3 3 3 7 7 7 7 7 7
#> 469 470 471 472 473 474 475 476 477 478 479 480 481
#> 7 7 7 7 7 7 7 7 7 7 7 7 7
#> 482 483 484 485 486 487 488 489 490 491 492 493 494
#> 7 7 1 1 1 1 1 1 1 1 1 1 1
#> 495 496 497 498 499 500 501 502 503 504 505 506 507
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 508 509 510 511 512 513 514 515 516 517 518 519 520
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 521 522 523 524 525 526 527 528 529 530 531 532 533
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 534 535 536 537 538 539 540 541 542 543 544 545 546
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 547 548 549 550 551 552 553 554 555 556 557 558 559
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 560 561 562 563 564 565 566 567 568 569 570 571 572
#> 4 4 4 4 4 4 4 4 3 3 3 3 3
#> 573 574 575 576 577 578 579 580 581 582 583 584 585
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 586 587 588 589 590 591 592 593 594 595 596 597 598
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 599 600 601 602 603 604 605 606 607 608 609 610 611
#> 3 3 3 3 3 3 3 3 5 5 3 5 5
#> 612 613 614 615 616 617 618 619 620 621 622 623 624
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 625 626 627 628 629 630 631 632 633 634 635 636 637
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 638 639 640 641 642 643 644 645 646 647 648 649 650
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 651 652 653 654 655 656 657 658 659 660 661 662 663
#> 5 4 4 4 4 4 4 4 4 4 4 4 4
#> 664 665 666 667 668 669 670 671 672 673 674 675 676
#> 4 4 4 4 4 4 4 4 4 2 2 2 2
#> 677 678 679 680 681 682 683 684 685 686 687 688 689
#> 2 2 2 2 2 2 2 2 2 2 2 2 2
#> 690 691 692 693 694 695 696 697 698 699 700 701 702
#> 2 2 2 2 4 4 4 4 4 4 4 4 4
#> 703 704 705 706 707 708 709 710 711 712 713 714 715
#> 4 4 4 4 4 4 4 4 4 4 4 4 2
#> 716 717 718 719 720 721 722 723 724 725 726 727 728
#> 2 2 2 2 2 2 2 2 2 2 2 2 2
#> 729 730 731 732 733 734 735 736 737 738 739 740 741
#> 2 2 2 2 2 2 2 3 3 3 3 3 3
#> 742 743 744 745 746 747 748 749 750 751 752 753 754
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 755 756 757 758 759 760 761 762 763 764 765 766 767
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 768 769 770 771 772 773 774 775 776 777 778 779 780
#> 3 3 3 5 5 5 5 5 5 5 1 1 1
#> 781 782 783 784 785 786 787 788 789 790 791 792 793
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 794 795 796 797 798 799 800 801 802 803 804 805 806
#> 1 1 1 1 1 5 5 5 5 5 5 5 5
#> 807 808 809 810 811 812 813 814 815 816 817 818 819
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 820 821 822 823 824 825 826 827 828 829 830 831 832
#> 6 6 6 6 6 6 6 2 2 2 2 2 2
#> 833 834 835 836 837 838 839 840 841 842 843 844 845
#> 2 6 6 6 6 6 6 6 3 3 3 3 5
#> 846 847 848 849 850 851 852 853 854 855 856 857 858
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 859 860 861 862 863 864 865 866 867 868 869 870 871
#> 5 5 5 1 1 1 1 1 1 1 1 6 6
#> 872 873 874 875 876 877 878 879 880 881 882 883 884
#> 6 6 6 6 6 6 6 6 6 6 6 5 5
#> 885 886 887 888 889 890 891 892 893 894 895 896 897
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 898 899 900 901 902 903 904 905 906 907 908 909 910
#> 5 5 5 5 5 5 4 4 4 4 6 6 6
#> 911 912 913 914 915 916 917 918 919 920 921 922 923
#> 6 6 6 6 6 6 6 6 6 6 6 3 3
#> 924 925 926 927 928 929 930 931 932 933 934 935 936
#> 3 1 1 1 1 1 1 1 4 4 4 4 4
#> 937 938 939 940 941 942 943 944 945 946 947 948 949
#> 4 4 4 4 4 4 4 4 4 3 3 3 3
#> 950 951 952 953 954 955 956 957 958 959 960 961 962
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 963 964 965 966 967 968 969 970 971 972 973 974 975
#> 3 3 3 3 4 4 4 4 4 4 4 4 4
#> 976 977 978 979 980 981 982 983 984 985 986 987 988
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 989 990 991 992 993 994 995 996 997 998 999 1000 1001
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014
#> 4 4 4 4 4 4 4 3 3 3 3 3 3
#> 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053
#> 3 3 3 3 3 3 3 3 3 3 4 4 4
#> 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079
#> 4 4 4 4 2 7 7 7 7 7 7 7 7
#> 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092
#> 7 7 7 7 7 7 7 7 7 7 7 7 7
#> 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118
#> 5 5 5 5 5 5 5 5 2 2 2 2 2
#> 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131
#> 2 2 2 2 2 2 2 2 2 2 2 2 2
#> 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144
#> 2 2 2 3 3 3 3 3 3 3 3 3 3
#> 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183
#> 3 3 3 3 3 3 4 4 4 4 4 4 4
#> 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209
#> 4 2 2 2 2 2 2 2 2 2 2 2 2
#> 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222
#> 2 2 2 2 2 2 2 2 2 5 3 5 5
#> 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248
#> 5 5 5 5 4 2 2 2 2 2 2 2 2
#> 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261
#> 2 2 2 2 2 2 2 2 2 2 2 2 1
#> 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287
#> 4 4 1 4 4 4 4 2 2 2 2 2 2
#> 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300
#> 2 2 2 2 2 2 2 2 2 2 2 2 4
#> 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313
#> 2 2 3 3 3 3 3 3 3 3 3 3 3
#> 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326
#> 3 3 3 3 3 3 3 3 3 3 2 2 2
#> 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339
#> 4 2 2 2 2 2 2 2 2 2 2 2 4
#> 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352
#> 4 4 4 4 4 1 1 1 1 1 1 1 1
#> 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365
#> 1 1 1 1 1 6 6 6 6 6 6 6 6
#> 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378
#> 3 3 3 3 3 3 3 3 3 3 3 3 3
#> 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391
#> 3 3 3 3 3 3 3 3 5 5 5 5 5
#> 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417
#> 5 5 5 3 3 3 3 3 3 3 3 3 3
#> 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430
#> 3 3 3 3 3 3 3 3 3 3 3 2 2
#> 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443
#> 2 2 2 2 2 2 2 2 2 2 2 2 2
#> 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456
#> 2 2 2 2 2 2 4 4 4 4 6 6 6
#> 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469
#> 6 6 6 6 6 6 6 6 6 3 3 3 3
#> 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482
#> 3 1 1 1 1 1 1 1 1 1 1 1 1
#> 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495
#> 1 1 1 1 1 1 1 1 1 4 4 4 4
#> 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521
#> 4 4 4 4 6 6 6 6 6 6 6 6 6
#> 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534
#> 6 6 6 6 6 6 6 6 6 6 6 6 1
#> 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573
#> 1 1 1 1 1 1 1 1 2 2 2 2 2
#> 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586
#> 2 2 6 6 6 6 6 6 6 6 6 6 6
#> 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599
#> 6 6 6 6 6 6 6 6 6 6 6 6 6
#> 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612
#> 6 6 6 6 6 6 6 6 6 6 6 6 6
#> 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625
#> 6 6 6 6 6 1 1 1 1 1 1 1 1
#> 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664
#> 1 1 1 1 1 1 1 1 4 4 4 4 4
#> 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690
#> 4 4 4 1 1 1 1 1 1 1 1 1 1
#> 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703
#> 1 1 1 1 1 1 1 1 1 1 1 2 2
#> 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716
#> 2 2 2 2 2 2 2 2 2 2 2 2 2
#> 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729
#> 2 2 2 4 4 2 6 6 6 6 6 6 6
#> 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742
#> 6 6 6 6 6 6 6 6 6 6 6 6 6
#> 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755
#> 6 1 1 1 1 1 1 1 1 1 1 1 1
#> 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794
#> 1 1 1 1 4 4 4 4 4 4 4 4 4
#> 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807
#> 4 4 4 4 4 4 4 4 4 2 2 2 5
#> 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846
#> 5 5 5 5 5 5 5 5 5 5 5 5 5
#> 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859
#> 5 5 1 1 1 1 1 1 1 1 1 1 1
#> 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872
#> 1 1 1 4 4 4 6 6 4 6 1 1 1
#> 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885
#> 1 1 1 1 1 1 1 1 1 1 1 1 1
#> 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898
#> 1 1 1 1 1 4 4 4 4 4 4 4 4
#> 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924
#> 4 4 4 4 4 4 4 4 4 4 4 4 4
#> 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937
#> 4 4 4 4 4 4 4 4 1 1 1 1 1
#> (cluster assignment vector abridged: observations 1938 through 2499,
#> each mapped to one of the 7 clusters)
#>
#> Within cluster sum of squares by cluster:
#> [1] 10371 5910 4023 4976 2844 4594 750
#> (between_SS / total_SS = 60.6 %)
#>
#> Available components:
#>
#> [1] "cluster" "centers" "totss" "withinss"
#> [5] "tot.withinss" "betweenss" "size" "iter"
#> [9] "ifault"
fviz_cluster(kmean, data=data1_scaled, repel=TRUE, depth =NULL, ellipse.type = "norm")
Given the large number of observations, the cluster visualization produced by the k-means method is not very informative. Moreover, the aim of clustering is to obtain groups that differ from each other while keeping the variation of observations within the same cluster small. Here, only 60.6% of the total variance is explained by the variation between clusters, which is not enough.
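Since only 60.6% of the variance lies between clusters, a different number of clusters might fit the data better. A minimal sketch of the elbow heuristic, assuming `data1_scaled` from the chunk above is available:

```r
# Sketch (not part of the original analysis): compare the total
# within-cluster sum of squares across candidate k and look for an
# "elbow" where adding clusters stops paying off.
set.seed(123)
wss <- sapply(1:10, function(k) {
  kmeans(data1_scaled, centers = k, nstart = 25)$tot.withinss
})
plot(1:10, wss, type = "b",
     xlab = "Number of clusters k",
     ylab = "Total within-cluster sum of squares")
```

The `tot.withinss` component is the same quantity reported in the k-means summary above.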
3.3 Focus on the influence of events over the SDG scores
In order to have an overview of the relationship between the different events variables and the SDG overall score, we make several graphs containing the Pearson correlation coefficient between the variable, the scatter plots describing the relationship between the variables, as well as the distribution of each variable.
Code
pairs(data_question3_2[,c("overallscore", "cases_per_million", "deaths_per_million", "stringency")], upper.panel=panel.cor, diag.panel=panel.hist, main="Correlation table and distribution of COVID variables")
The variables used to capture the impact of COVID-19 do not seem to have a strong influence on the overall score. We will nevertheless explore the individual SDGs, since we believe COVID-19 had a specific influence on some of them, for instance "good health and well-being" or "decent work and economic growth".
Code
pairs(data_question3_3[,c("overallscore", "ongoing", "sum_deaths", "pop_affected", "area_affected", "maxintensity")], upper.panel=panel.cor, diag.panel=panel.hist, main="Correlation table and distribution of conflicts variables")
The variables used to capture the impact of conflicts do not seem to have a strong influence on the overall score either, but we will explore the individual SDGs, since we believe conflicts have a specific influence on some of them.
To explore our data on events such as disasters, COVID-19 and conflicts, we first need to identify which countries are most affected by them. To do so, we ran a time-series analysis on each of these three events, each time using different variables.
Code
# Converted 'year' column to date format
Q3.1$year <- as.Date(as.character(Q3.1$year), format = "%Y")
Q3.2$year <- as.Date(as.character(Q3.2$year), format = "%Y")
Q3.3$year <- as.Date(as.character(Q3.3$year), format = "%Y")
This is our time-series analysis of COVID-19 cases per million by region between late 2018 and 2022.
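An aside on the conversion just performed (a side note, not part of the original pipeline): parsing a bare year with `as.Date(x, format = "%Y")` leaves the month and day unspecified, and they are filled in platform-dependently. Anchoring each year to January 1st makes the result reproducible:

```r
# Pin each year to January 1st instead of relying on strptime defaults
years <- c("2018", "2019", "2020")
as.Date(paste0(years, "-01-01"))
#> [1] "2018-01-01" "2019-01-01" "2020-01-01"
```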
Code
covid_filtered <- Q3.2[Q3.2$year >= as.Date("2018-12-12"), ]
ggplot(data = covid_filtered, aes(x = year, y = cases_per_million, group = region, color = region)) +
geom_smooth(method = "loess", se = FALSE, span = 0.8, size = 0.5) +
labs(title = "Trend of COVID-19 Cases per Million Over Time",
x = "Year", y = "Cases per Million") +
facet_wrap(~ region, nrow = 4) +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(margin = margin(b = 20), hjust = 0.5,
vjust = 8, lineheight = 2),
strip.text = element_blank(),
panel.spacing = unit(0.5, "lines")
) +
theme(legend.position = "right") +
guides(color = guide_legend(ncol = 1))
This is our time-series analysis of COVID-19 deaths per million by region between late 2018 and 2022.
Code
ggplot(data = covid_filtered, aes(x = year, y = deaths_per_million, group = region, color = region)) +
geom_smooth(method = "loess", se = FALSE, span = 0.8, size = 0.5) +
labs(title = "Trend of COVID-19 Deaths per Million Over Time", x = "Year", y = "Deaths per Million") +
facet_wrap(~ region, nrow = 4) +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(margin = margin(b = 20), hjust = 0.5,
vjust = 8, lineheight = 2),
strip.text = element_blank(),
panel.spacing = unit(0.5, "lines")
) +
theme(legend.position = "right") +
guides(color = guide_legend(ncol = 1))
This is our time-series analysis of COVID-19 stringency by region between late 2018 and 2022.
Code
ggplot(data = covid_filtered, aes(x = year, y = stringency, group = region, color = region)) +
geom_smooth(method = "loess", se = FALSE, span = 0.7, size = 0.5) +
labs(title = "Trend of COVID-19 Stringency Over Time", x = "Year", y = "Stringency") +
facet_wrap(~ region, nrow = 2) +
theme_minimal() +
theme(legend.position = "right") +
guides(color = guide_legend(nrow = 4))
This is our time-series analysis of climatic disasters, measured by the total affected population per region.
Code
Q3.1[is.na(Q3.1)] <- 0
ggplot(data = Q3.1, aes(x = year, y = total_affected, group = region, color = region)) +
geom_smooth(method = "loess", se = FALSE, span = 0.7, size = 0.5) +
labs(title = "Trend of Total Affected from Climatic Disasters Over Time", x = "Year", y = "Total Affected") +
facet_wrap(~ region, nrow = 4) +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(margin = margin(b = 20), hjust = 0.5,
vjust = 8, lineheight = 2),
strip.text = element_blank(),
panel.spacing = unit(0.5, "lines")
) +
theme(legend.position = "right") +
guides(color = guide_legend(ncol = 1))
This is our time-series analysis of conflict deaths per region between 2000 and 2016.
Code
conflicts_filtered <- Q3.3[Q3.3$year >= as.Date("2000-01-01") & Q3.3$year <= as.Date("2016-12-31"), ]
ggplot(data = conflicts_filtered, aes(x = year, y = sum_deaths, group = region, color = region)) +
geom_smooth(method = "loess", se = FALSE, span = 0.3, size = 0.5) + # Using loess smoothing method
labs(title = "Trend of Deaths by Conflicts Over Time", x = "Year", y = "Sum Deaths") +
facet_wrap(~ region, nrow = 4) +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(margin = margin(b = 20), hjust = 0.5,
vjust = 8, lineheight = 2),
strip.text = element_blank(),
panel.spacing = unit(0.5, "lines")
) +
theme(legend.position = "right") +
guides(color = guide_legend(ncol = 1))
We can see that the regions most affected by conflicts are the Middle East & North Africa, Sub-Saharan Africa and South Asia, followed to a lesser extent by Latin America & the Caribbean and Eastern Europe.
This is our time-series analysis of the population affected by conflicts per region between 2000 and 2016.
Code
ggplot(data = conflicts_filtered, aes(x = year, y = pop_affected, group = region, color = region)) +
geom_smooth(method = "loess", se = FALSE, span = 0.3, size = 0.5) + # Using loess smoothing method
labs(title = "Trend of Population Affected by Conflicts Over Time", x = "Year", y = "pop_affected") +
facet_wrap(~ region, nrow = 4) +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(margin = margin(b = 20), hjust = 0.5,
vjust = 8, lineheight = 2),
strip.text = element_blank(),
panel.spacing = unit(0.5, "lines")
) +
theme(legend.position = "right") +
guides(color = guide_legend(ncol = 1))
We can see that the regions most affected by conflicts are the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean, Eastern Europe and, at times, Caucasus and Central Asia.
Now that we have visualized which regions are most impacted by these three events, we can run correlation analyses per region to see whether these events indeed have an impact on the evolution of the SDG goals.
Here we want to analyse the correlation between climate disasters and the SDG goals in South and East Asia.
Code
Q3.1[is.na(Q3.1)] <- 0
south_east_asia_data <- Q3.1[Q3.1$region %in% c("South Asia", "East Asia"), ]
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "total_affected", "no_homeless")
correlation_matrix_disaster_Asia <- cor(south_east_asia_data[, relevant_columns], use = "complete.obs")
kable(correlation_matrix_disaster_Asia)
| | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | total_affected | no_homeless |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| goal1 | 1.000 | -0.026 | 0.322 | 0.394 | 0.186 | 0.358 | 0.402 | 0.537 | 0.203 | 0.577 | 0.170 | -0.035 | -0.073 | 0.450 | 0.125 | -0.040 | -0.050 |
| goal2 | -0.026 | 1.000 | 0.647 | 0.505 | 0.573 | 0.547 | 0.512 | 0.548 | 0.679 | -0.205 | 0.520 | -0.302 | -0.321 | -0.280 | 0.474 | 0.099 | -0.076 |
| goal3 | 0.322 | 0.647 | 1.000 | 0.789 | 0.588 | 0.703 | 0.826 | 0.806 | 0.864 | -0.170 | 0.804 | -0.747 | -0.725 | -0.212 | 0.719 | -0.017 | -0.105 |
| goal4 | 0.394 | 0.505 | 0.789 | 1.000 | 0.605 | 0.497 | 0.630 | 0.610 | 0.656 | -0.080 | 0.455 | -0.580 | -0.604 | -0.103 | 0.373 | 0.093 | -0.014 |
| goal5 | 0.186 | 0.573 | 0.588 | 0.605 | 1.000 | 0.563 | 0.451 | 0.453 | 0.427 | -0.100 | 0.529 | -0.404 | -0.450 | -0.205 | 0.347 | 0.055 | -0.152 |
| goal6 | 0.358 | 0.547 | 0.703 | 0.497 | 0.563 | 1.000 | 0.667 | 0.625 | 0.693 | -0.006 | 0.655 | -0.578 | -0.542 | -0.135 | 0.582 | -0.128 | -0.207 |
| goal7 | 0.402 | 0.512 | 0.826 | 0.630 | 0.451 | 0.667 | 1.000 | 0.702 | 0.760 | -0.084 | 0.809 | -0.536 | -0.487 | -0.208 | 0.548 | -0.024 | -0.060 |
| goal8 | 0.537 | 0.548 | 0.806 | 0.610 | 0.453 | 0.625 | 0.702 | 1.000 | 0.741 | 0.189 | 0.642 | -0.576 | -0.563 | -0.033 | 0.639 | -0.012 | -0.090 |
| goal9 | 0.203 | 0.679 | 0.864 | 0.656 | 0.427 | 0.693 | 0.760 | 0.741 | 1.000 | -0.115 | 0.671 | -0.733 | -0.730 | -0.220 | 0.660 | 0.011 | -0.067 |
| goal10 | 0.577 | -0.205 | -0.170 | -0.080 | -0.100 | -0.006 | -0.084 | 0.189 | -0.115 | 1.000 | -0.306 | 0.182 | 0.158 | 0.608 | -0.033 | -0.150 | -0.038 |
| goal11 | 0.170 | 0.520 | 0.804 | 0.455 | 0.529 | 0.655 | 0.809 | 0.642 | 0.671 | -0.306 | 1.000 | -0.631 | -0.557 | -0.354 | 0.695 | -0.123 | -0.154 |
| goal12 | -0.035 | -0.302 | -0.747 | -0.580 | -0.404 | -0.578 | -0.536 | -0.576 | -0.733 | 0.182 | -0.631 | 1.000 | 0.959 | 0.139 | -0.732 | 0.112 | 0.116 |
| goal13 | -0.073 | -0.321 | -0.725 | -0.604 | -0.450 | -0.542 | -0.487 | -0.563 | -0.730 | 0.158 | -0.557 | 0.959 | 1.000 | 0.069 | -0.671 | 0.055 | 0.096 |
| goal15 | 0.450 | -0.280 | -0.212 | -0.103 | -0.205 | -0.135 | -0.208 | -0.033 | -0.220 | 0.608 | -0.354 | 0.139 | 0.069 | 1.000 | 0.022 | -0.071 | -0.022 |
| goal16 | 0.125 | 0.474 | 0.719 | 0.373 | 0.347 | 0.582 | 0.548 | 0.639 | 0.660 | -0.033 | 0.695 | -0.732 | -0.671 | 0.022 | 1.000 | -0.146 | -0.130 |
| total_affected | -0.040 | 0.099 | -0.017 | 0.093 | 0.055 | -0.128 | -0.024 | -0.012 | 0.011 | -0.150 | -0.123 | 0.112 | 0.055 | -0.071 | -0.146 | 1.000 | 0.147 |
| no_homeless | -0.050 | -0.076 | -0.105 | -0.014 | -0.152 | -0.207 | -0.060 | -0.090 | -0.067 | -0.038 | -0.154 | 0.116 | 0.096 | -0.022 | -0.130 | 0.147 | 1.000 |
Code
cor_melted <- as.data.frame(as.table(correlation_matrix_disaster_Asia))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
geom_tile() +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1, 1), space = "Lab",
name = "Correlation") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
axis.text.y = element_text(size = 8)) +
coord_fixed() +
labs(x = '', y = '',
title = 'Correlation between the climate disasters and the SDG goals in South and East Asia')
The correlations between the disaster variables (total_affected, no_homeless) and the SDG scores are all weak, so climate disasters do not appear to have a large impact on the SDG goals in this region.
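To back this reading with a formal test, we can check whether an individual correlation differs significantly from zero. A minimal sketch, assuming `south_east_asia_data` from the chunk above and using `goal13` as an illustrative example:

```r
# Sketch: test whether the correlation between total_affected and one
# SDG score differs from zero; cor.test() drops incomplete pairs.
# A large p-value is consistent with the "little impact" reading above.
test <- cor.test(south_east_asia_data$total_affected,
                 south_east_asia_data$goal13)
round(test$estimate, 3)  # Pearson r
test$p.value             # significance of the correlation
```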
Here we want to analyse the correlation between COVID-19 and the SDG goals, restricted to the COVID period.
Code
covid_filtered <- Q3.2[Q3.2$year >= as.Date("2019-01-01"), ]
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "stringency", "cases_per_million", "deaths_per_million")
# Subset data with relevant columns for correlation analysis
relevant_data <- covid_filtered[, relevant_columns]
correlation_matrix_Covid <- cor(relevant_data, use = "complete.obs")
kable(correlation_matrix_Covid)
| | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | stringency | cases_per_million | deaths_per_million |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| goal1 | 1.000 | 0.534 | 0.867 | 0.777 | 0.445 | 0.763 | 0.798 | 0.584 | 0.781 | 0.497 | 0.727 | -0.648 | -0.553 | 0.099 | 0.714 | 0.056 | 0.341 | 0.361 |
| goal2 | 0.534 | 1.000 | 0.560 | 0.541 | 0.469 | 0.605 | 0.469 | 0.636 | 0.569 | 0.240 | 0.463 | -0.353 | -0.284 | 0.122 | 0.451 | 0.088 | 0.206 | 0.242 |
| goal3 | 0.867 | 0.560 | 1.000 | 0.829 | 0.641 | 0.836 | 0.845 | 0.693 | 0.881 | 0.456 | 0.828 | -0.789 | -0.669 | 0.152 | 0.825 | 0.040 | 0.412 | 0.373 |
| goal4 | 0.777 | 0.541 | 0.829 | 1.000 | 0.656 | 0.764 | 0.803 | 0.596 | 0.773 | 0.309 | 0.758 | -0.655 | -0.558 | 0.058 | 0.674 | 0.113 | 0.349 | 0.339 |
| goal5 | 0.445 | 0.469 | 0.641 | 0.656 | 1.000 | 0.663 | 0.606 | 0.587 | 0.645 | 0.098 | 0.690 | -0.653 | -0.564 | 0.203 | 0.628 | 0.060 | 0.330 | 0.261 |
| goal6 | 0.763 | 0.605 | 0.836 | 0.764 | 0.663 | 1.000 | 0.765 | 0.711 | 0.811 | 0.366 | 0.766 | -0.727 | -0.583 | 0.262 | 0.729 | 0.069 | 0.389 | 0.398 |
| goal7 | 0.798 | 0.469 | 0.845 | 0.803 | 0.606 | 0.765 | 1.000 | 0.556 | 0.740 | 0.323 | 0.793 | -0.654 | -0.494 | 0.123 | 0.697 | 0.055 | 0.340 | 0.374 |
| goal8 | 0.584 | 0.636 | 0.693 | 0.596 | 0.587 | 0.711 | 0.556 | 1.000 | 0.695 | 0.387 | 0.587 | -0.635 | -0.556 | 0.283 | 0.627 | 0.024 | 0.356 | 0.278 |
| goal9 | 0.781 | 0.569 | 0.881 | 0.773 | 0.645 | 0.811 | 0.740 | 0.695 | 1.000 | 0.462 | 0.753 | -0.857 | -0.760 | 0.189 | 0.819 | 0.074 | 0.460 | 0.353 |
| goal10 | 0.497 | 0.240 | 0.456 | 0.309 | 0.098 | 0.366 | 0.323 | 0.387 | 0.462 | 1.000 | 0.281 | -0.496 | -0.469 | 0.215 | 0.519 | -0.030 | 0.262 | 0.142 |
| goal11 | 0.727 | 0.463 | 0.828 | 0.758 | 0.690 | 0.766 | 0.793 | 0.587 | 0.753 | 0.281 | 1.000 | -0.696 | -0.576 | 0.089 | 0.764 | 0.037 | 0.345 | 0.328 |
| goal12 | -0.648 | -0.353 | -0.789 | -0.655 | -0.653 | -0.727 | -0.654 | -0.635 | -0.857 | -0.496 | -0.696 | 1.000 | 0.876 | -0.316 | -0.825 | 0.013 | -0.466 | -0.292 |
| goal13 | -0.553 | -0.284 | -0.669 | -0.558 | -0.564 | -0.583 | -0.494 | -0.556 | -0.760 | -0.469 | -0.576 | 0.876 | 1.000 | -0.205 | -0.682 | -0.018 | -0.364 | -0.166 |
| goal15 | 0.099 | 0.122 | 0.152 | 0.058 | 0.203 | 0.262 | 0.123 | 0.283 | 0.189 | 0.215 | 0.089 | -0.316 | -0.205 | 1.000 | 0.303 | -0.068 | 0.169 | 0.223 |
| goal16 | 0.714 | 0.451 | 0.825 | 0.674 | 0.628 | 0.729 | 0.697 | 0.627 | 0.819 | 0.519 | 0.764 | -0.825 | -0.682 | 0.303 | 1.000 | -0.023 | 0.425 | 0.316 |
| stringency | 0.056 | 0.088 | 0.040 | 0.113 | 0.060 | 0.069 | 0.055 | 0.024 | 0.074 | -0.030 | 0.037 | 0.013 | -0.018 | -0.068 | -0.023 | 1.000 | 0.041 | 0.336 |
| cases_per_million | 0.341 | 0.206 | 0.412 | 0.349 | 0.330 | 0.389 | 0.340 | 0.356 | 0.460 | 0.262 | 0.345 | -0.466 | -0.364 | 0.169 | 0.425 | 0.041 | 1.000 | 0.416 |
| deaths_per_million | 0.361 | 0.242 | 0.373 | 0.339 | 0.261 | 0.398 | 0.374 | 0.278 | 0.353 | 0.142 | 0.328 | -0.292 | -0.166 | 0.223 | 0.316 | 0.336 | 0.416 | 1.000 |
Code
cor_melted <- as.data.frame(as.table(correlation_matrix_Covid))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
geom_tile() +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1, 1), space = "Lab",
name = "Correlation") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
axis.text.y = element_text(size = 8)) +
coord_fixed() +
labs(x = '', y = '',
title = 'Correlation between COVID and the SDG goals')
We reach the same conclusion: the COVID-19 variables (stringency, cases and deaths per million) are only weakly correlated with the SDG scores, which is surprising given the scale of the pandemic.
Here we want to analyse the correlation between conflicts deaths and the SDG goals only for the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean and Eastern Europe regions.
Code
# Filter data for specific regions
selected_regions <- c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia", "Latin America & the Caribbean", "Eastern Europe")
conflicts_selected <- Q3.3[Q3.3$region %in% selected_regions, ]
# Select relevant columns for the correlation analysis
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "sum_deaths")
# Compute correlation matrix for the selected regions
correlation_matrix_Conflicts_Deaths <- cor(conflicts_selected[, relevant_columns], use = "complete.obs")
# View the correlation matrix
kable(correlation_matrix_Conflicts_Deaths)
| | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | sum_deaths |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| goal1 | 1.000 | 0.476 | 0.910 | 0.791 | 0.406 | 0.799 | 0.865 | 0.546 | 0.723 | 0.272 | 0.783 | -0.730 | -0.594 | 0.039 | 0.613 | -0.095 |
| goal2 | 0.476 | 1.000 | 0.544 | 0.531 | 0.540 | 0.638 | 0.531 | 0.571 | 0.530 | 0.102 | 0.475 | -0.376 | -0.322 | 0.154 | 0.430 | -0.173 |
| goal3 | 0.910 | 0.544 | 1.000 | 0.814 | 0.507 | 0.832 | 0.876 | 0.596 | 0.768 | 0.223 | 0.828 | -0.745 | -0.587 | 0.014 | 0.666 | -0.117 |
| goal4 | 0.791 | 0.531 | 0.814 | 1.000 | 0.645 | 0.748 | 0.808 | 0.536 | 0.696 | 0.089 | 0.768 | -0.667 | -0.533 | 0.007 | 0.496 | -0.101 |
| goal5 | 0.406 | 0.540 | 0.507 | 0.645 | 1.000 | 0.587 | 0.539 | 0.454 | 0.516 | -0.178 | 0.620 | -0.464 | -0.351 | 0.191 | 0.384 | -0.162 |
| goal6 | 0.799 | 0.638 | 0.832 | 0.748 | 0.587 | 1.000 | 0.812 | 0.670 | 0.734 | 0.137 | 0.788 | -0.711 | -0.529 | 0.187 | 0.599 | -0.166 |
| goal7 | 0.865 | 0.531 | 0.876 | 0.808 | 0.539 | 0.812 | 1.000 | 0.539 | 0.720 | 0.152 | 0.841 | -0.704 | -0.531 | 0.039 | 0.566 | -0.094 |
| goal8 | 0.546 | 0.571 | 0.596 | 0.536 | 0.454 | 0.670 | 0.539 | 1.000 | 0.609 | 0.209 | 0.542 | -0.519 | -0.389 | 0.181 | 0.462 | -0.102 |
| goal9 | 0.723 | 0.530 | 0.768 | 0.696 | 0.516 | 0.734 | 0.720 | 0.609 | 1.000 | 0.300 | 0.698 | -0.759 | -0.689 | 0.137 | 0.591 | -0.077 |
| goal10 | 0.272 | 0.102 | 0.223 | 0.089 | -0.178 | 0.137 | 0.152 | 0.209 | 0.300 | 1.000 | 0.035 | -0.297 | -0.299 | 0.118 | 0.275 | 0.078 |
| goal11 | 0.783 | 0.475 | 0.828 | 0.768 | 0.620 | 0.788 | 0.841 | 0.542 | 0.698 | 0.035 | 1.000 | -0.729 | -0.570 | 0.031 | 0.656 | -0.155 |
| goal12 | -0.730 | -0.376 | -0.745 | -0.667 | -0.464 | -0.711 | -0.704 | -0.519 | -0.759 | -0.297 | -0.729 | 1.000 | 0.865 | -0.170 | -0.666 | 0.122 |
| goal13 | -0.594 | -0.322 | -0.587 | -0.533 | -0.351 | -0.529 | -0.531 | -0.389 | -0.689 | -0.299 | -0.570 | 0.865 | 1.000 | -0.150 | -0.493 | 0.079 |
| goal15 | 0.039 | 0.154 | 0.014 | 0.007 | 0.191 | 0.187 | 0.039 | 0.181 | 0.137 | 0.118 | 0.031 | -0.170 | -0.150 | 1.000 | 0.191 | -0.063 |
| goal16 | 0.613 | 0.430 | 0.666 | 0.496 | 0.384 | 0.599 | 0.566 | 0.462 | 0.591 | 0.275 | 0.656 | -0.666 | -0.493 | 0.191 | 1.000 | -0.162 |
| sum_deaths | -0.095 | -0.173 | -0.117 | -0.101 | -0.162 | -0.166 | -0.094 | -0.102 | -0.077 | 0.078 | -0.155 | 0.122 | 0.079 | -0.063 | -0.162 | 1.000 |
Code
# Melt the correlation matrix for ggplot2
cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Deaths))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
# Create the heatmap
ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
geom_tile() +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1, 1), space = "Lab",
name = "Correlation") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
axis.text.y = element_text(size = 8)) +
coord_fixed() +
labs(x = '', y = '',
title = 'Correlation between Conflicts deaths and the SDG goals')
Finally, we want to analyse the correlation between the population affected by conflicts and the SDG goals, restricted to the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean, Eastern Europe, and Caucasus and Central Asia regions.
Code
# Filter data for specific regions (pop_affected)
selected_regions <- c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia", "Latin America & the Caribbean", "Eastern Europe","Caucasus and Central Asia")
conflicts_selected <- Q3.3[Q3.3$region %in% selected_regions, ]
# Select relevant columns for the correlation analysis
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "pop_affected")
# Compute correlation matrix for the selected regions
correlation_matrix_Conflicts_Pop_Affected <- cor(conflicts_selected[, relevant_columns], use = "complete.obs")
# View the correlation matrix
kable(correlation_matrix_Conflicts_Pop_Affected)
| | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | pop_affected |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| goal1 | 1.000 | 0.476 | 0.910 | 0.791 | 0.406 | 0.799 | 0.865 | 0.546 | 0.723 | 0.272 | 0.783 | -0.730 | -0.594 | 0.039 | 0.613 | -0.066 |
| goal2 | 0.476 | 1.000 | 0.544 | 0.531 | 0.540 | 0.638 | 0.531 | 0.571 | 0.530 | 0.102 | 0.475 | -0.376 | -0.322 | 0.154 | 0.430 | -0.083 |
| goal3 | 0.910 | 0.544 | 1.000 | 0.814 | 0.507 | 0.832 | 0.876 | 0.596 | 0.768 | 0.223 | 0.828 | -0.745 | -0.587 | 0.014 | 0.666 | -0.058 |
| goal4 | 0.791 | 0.531 | 0.814 | 1.000 | 0.645 | 0.748 | 0.808 | 0.536 | 0.696 | 0.089 | 0.768 | -0.667 | -0.533 | 0.007 | 0.496 | -0.030 |
| goal5 | 0.406 | 0.540 | 0.507 | 0.645 | 1.000 | 0.587 | 0.539 | 0.454 | 0.516 | -0.178 | 0.620 | -0.464 | -0.351 | 0.191 | 0.384 | -0.152 |
| goal6 | 0.799 | 0.638 | 0.832 | 0.748 | 0.587 | 1.000 | 0.812 | 0.670 | 0.734 | 0.137 | 0.788 | -0.711 | -0.529 | 0.187 | 0.599 | -0.106 |
| goal7 | 0.865 | 0.531 | 0.876 | 0.808 | 0.539 | 0.812 | 1.000 | 0.539 | 0.720 | 0.152 | 0.841 | -0.704 | -0.531 | 0.039 | 0.566 | -0.071 |
| goal8 | 0.546 | 0.571 | 0.596 | 0.536 | 0.454 | 0.670 | 0.539 | 1.000 | 0.609 | 0.209 | 0.542 | -0.519 | -0.389 | 0.181 | 0.462 | -0.099 |
| goal9 | 0.723 | 0.530 | 0.768 | 0.696 | 0.516 | 0.734 | 0.720 | 0.609 | 1.000 | 0.300 | 0.698 | -0.759 | -0.689 | 0.137 | 0.591 | 0.000 |
| goal10 | 0.272 | 0.102 | 0.223 | 0.089 | -0.178 | 0.137 | 0.152 | 0.209 | 0.300 | 1.000 | 0.035 | -0.297 | -0.299 | 0.118 | 0.275 | 0.074 |
| goal11 | 0.783 | 0.475 | 0.828 | 0.768 | 0.620 | 0.788 | 0.841 | 0.542 | 0.698 | 0.035 | 1.000 | -0.729 | -0.570 | 0.031 | 0.656 | -0.103 |
| goal12 | -0.730 | -0.376 | -0.745 | -0.667 | -0.464 | -0.711 | -0.704 | -0.519 | -0.759 | -0.297 | -0.729 | 1.000 | 0.865 | -0.170 | -0.666 | 0.107 |
| goal13 | -0.594 | -0.322 | -0.587 | -0.533 | -0.351 | -0.529 | -0.531 | -0.389 | -0.689 | -0.299 | -0.570 | 0.865 | 1.000 | -0.150 | -0.493 | 0.021 |
| goal15 | 0.039 | 0.154 | 0.014 | 0.007 | 0.191 | 0.187 | 0.039 | 0.181 | 0.137 | 0.118 | 0.031 | -0.170 | -0.150 | 1.000 | 0.191 | -0.108 |
| goal16 | 0.613 | 0.430 | 0.666 | 0.496 | 0.384 | 0.599 | 0.566 | 0.462 | 0.591 | 0.275 | 0.656 | -0.666 | -0.493 | 0.191 | 1.000 | -0.099 |
| pop_affected | -0.066 | -0.083 | -0.058 | -0.030 | -0.152 | -0.106 | -0.071 | -0.099 | 0.000 | 0.074 | -0.103 | 0.107 | 0.021 | -0.108 | -0.099 | 1.000 |
Code
# Melt the correlation matrix for ggplot2
cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Pop_Affected))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
# Create the heatmap
ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
geom_tile() +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1, 1), space = "Lab",
name = "Correlation") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
axis.text.y = element_text(size = 8)) +
coord_fixed() +
labs(x = '', y = '',
title = 'Correlation between Conflicts Affected Population and the SDG goals')
3.4 Focus on relationship between SDGs
4 Analysis
4.1 Answers to the research questions
4.1.1 Focus on relationship between SDGs
How are the different SDGs linked? (We want to see if some SDGs are linked in the fact that a high score on one implies a high score on the other, and thus if we can make groups of SDGs that are comparable in that way).
Let’s explore how the different SDG are correlated together by creating a heatmap of the correlation between our variables. We added a script to check whether the correlations are significantly different from 0. First, let’s select the SDGs scores.
Code
sdg_scores <- Q4[, c('goal1', 'goal2', 'goal3', 'goal4', 'goal5', 'goal6',
'goal7', 'goal8', 'goal9', 'goal10', 'goal11', 'goal12',
'goal13', 'goal15', 'goal16', 'goal17')]
We then initialize the matrices and compute the correlation and p-value for each pair of SDG scores.
Code
cor_matrix <- matrix(nrow = ncol(sdg_scores), ncol = ncol(sdg_scores))
p_matrix <- matrix(nrow = ncol(sdg_scores), ncol = ncol(sdg_scores))
rownames(cor_matrix) <- colnames(sdg_scores)
rownames(p_matrix) <- colnames(sdg_scores)
colnames(cor_matrix) <- colnames(sdg_scores)
colnames(p_matrix) <- colnames(sdg_scores)
# Calculate correlation and p-values
for (i in 1:ncol(sdg_scores)) {
for (j in 1:ncol(sdg_scores)) {
test_result <- cor.test(sdg_scores[, i], sdg_scores[, j])
cor_matrix[i, j] <- test_result$estimate
p_matrix[i, j] <- test_result$p.value}}
We then reshape our data so we can use the ggplot2 package to create our heatmap.
Code
melted_cor_matrix <-
melt(cor_matrix)
melted_p_matrix <-
melt(matrix(as.vector(p_matrix), nrow = ncol(sdg_scores)))
plot_data <- # Combine the datasets
cbind(melted_cor_matrix, p_value = melted_p_matrix$value)
ggplot(plot_data, aes(Var1, Var2, fill = value)) +
geom_tile() +
geom_text(aes(label = sprintf("%.2f", value), color = p_value < 0.05),
vjust = 1) +
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1,1), space = "Lab",
name="Pearson\nCorrelation") +
scale_color_manual(values = c("black", "yellow")) + # black when significant, yellow if not
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
axis.text.y = element_text(angle = 45, hjust = 1),
legend.position = "none") +
labs(x = 'SDG Goals', y = 'SDG Goals',
title = 'Correlation Matrix with Significance Indicator')
As noted above, we tested whether each correlation differs significantly from zero at the 5% level, marking any non-significant correlations in yellow on the plot. The absence of yellow labels indicates that all pairwise correlations between the SDG scores are statistically significant.
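As a cross-check, the correlation and p-value matrices built by the nested `cor.test()` loop above can also be obtained in a single call. A sketch assuming the Hmisc package is installed (an assumption: it is not used elsewhere in this report):

```r
library(Hmisc)  # assumed available; provides rcorr()
res <- rcorr(as.matrix(sdg_scores), type = "pearson")
res$r  # Pearson correlations, matching cor_matrix
res$P  # p-values per pair (the diagonal is NA rather than 0)
```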
We can have a look at the shape of the correlation between the SDGs with the plot function.
Code
plot(sdg_scores)
4.2 Different methods considered
4.3 Competing approaches
4.4 Justifications
5 Conclusion
- Take home message
- Limitations
- Future work?